This blog post is part of a series on "deploying a Spring Boot and Angular application on Azure", here is the full list of posts:
- Creating a Spring Boot and Angular application for Azure (1/7)
- Creating and configuring Azure Web App and MySQL to host a Spring Boot application (2/7)
- Using Azure Pipelines to build, test and deploy a Spring Boot and Angular application (3/7)
- Using Azure Application Insights with Spring Boot (4/7)
- Using Azure Application Insights with Angular (5/7)
- Configuring Azure CDN to boost Angular performance (6/7)
- Configuring Azure Redis Cache to boost Spring Boot performance (7/7)
Scaling out with Azure Web Apps
Since the beginning of this series, we have used the "scale up" option of Azure Web Apps. As we saw in part 3, we can change the underlying instance on the fly, so we can scale up (or down) depending on our load and budget. This is also why we set up Azure Application Insights in part 4: to better understand our needs.
Now, a better solution for scaling is to "scale out": instead of replacing our instance with a bigger one, we add more instances. This allows for greater scalability (at some point you can't buy a bigger instance...) and is much more cost-efficient, as we will automatically run only the number of instances we need.
For this, Azure offers a "Scale out" option, just under the "Scale up" option, where you set up the rules that trigger scaling out. For example, here we have defined a rule: when CPU usage stays above 70% for more than 10 minutes, new instances are launched automatically, up to a maximum of 20 instances:
This setup lets our application scale out (and back in) automatically, depending on our workload. But this is only one part of the problem.
The database is the problem
Scaling out is awesome, but most of the time performance issues come from the data store. The industry has worked on this problem for decades, and it is the reason why the JHipster team focuses so much on caching: it is the best way to relieve pressure on the data store.
Let's see what our options are in terms of data store performance:
- We can buy a more expensive instance for our data store: of course this will work, at least for some time, like the "scale up" mechanism we studied with Azure Web Apps. The main issue here is that we want to be budget-conscious, and a quick look at the prices of high-end database instances will make you look for other solutions.
- We can use a NoSQL database, typically CosmosDB. CosmosDB is a distributed, very efficient database that can scale out automatically. This is a great option, but it would require us to change our current database and tie us to Azure: this is not the topic of this series, but rest assured we will do a specific CosmosDB post in the future, as it's a very exciting technology.
- Use the Hibernate 2nd level cache: this is the caching mechanism included in Hibernate, and it is extremely powerful when used correctly. It would be our recommended solution; however, since we are going to scale out (see the previous section), it requires a distributed cache.
- Use the Spring Cache abstraction: a very simple yet powerful mechanism that lets you cache method calls. Used correctly, it can be even more powerful than the Hibernate 2nd level cache, as it works at the business level and can therefore cache more complex objects. As with Hibernate, this requires a distributed caching solution.
The last two options are clearly the best for our use case, but they both require a distributed caching solution; otherwise, as soon as our Azure Web App scales out, instances will start serving stale data.
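Under the hood, the Spring Cache abstraction automates the classic cache-aside pattern. Here is a minimal plain-Java sketch of that pattern (`ProjectService` and its methods are hypothetical names for illustration, not code from the application in this series):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the cache-aside pattern that @Cacheable automates:
// the first call computes and stores the result, subsequent calls with
// the same key return the cached value without touching the data store.
public class ProjectService {

    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();

    public List<String> getProjectNames(String team) {
        // computeIfAbsent only runs the loader on a cache miss
        return cache.computeIfAbsent(team, this::loadProjectNames);
    }

    private List<String> loadProjectNames(String team) {
        // Stand-in for an expensive database query
        System.out.println("Loading projects for " + team);
        return List.of(team + "-frontend", team + "-backend");
    }
}
```

With `@Cacheable`, Spring generates this boilerplate for you, and the backing `Map` can be swapped for a distributed store such as Redis without touching the business code.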
Caching options and issues
In Java, we usually distinguish 3 kinds of caches:
- Local JVM caches: they are the fastest, as getting data is usually just following a pointer to an object that is directly available in memory. But they take memory from the JVM heap: as your cache grows, the JVM's garbage collector has more and more work to do, resulting in poor application performance.
- Off-heap caches: they run in another process, next to the JVM. They do not use the network, but they require data to be serialized/deserialized as it moves between the two processes, which is quite costly. This makes them much slower than a local JVM cache, but they can grow to gigabytes of data without affecting the JVM.
- Remote caches: like off-heap caches, but running on another server. This usually allows for even bigger caches (as they are set up on dedicated high-memory instances), but they are slower as they require network access.
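The serialization cost mentioned above can be made concrete with a short sketch: this is, roughly, the round trip that off-heap and remote caches (such as Redis) pay on every put and get, and that a local JVM cache avoids entirely (`CachedProject` is a hypothetical value object, for illustration only):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

public class SerializationRoundTrip {

    // A value must be serializable to cross a process or network boundary
    public record CachedProject(Long id, String name) implements Serializable {}

    // What a cache client does on put: object -> bytes
    public static byte[] serialize(Serializable value) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(value);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bytes.toByteArray();
    }

    // What a cache client does on get: bytes -> object
    public static Object deserialize(byte[] data) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Real Redis clients typically use more compact codecs than built-in Java serialization, but the extra copy and conversion work is the same in principle.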
Several well-known solutions exist, and are typically supported by JHipster:
- Ehcache, which unfortunately doesn't have an API to scale out (you can't add new nodes once the cluster is created).
- Hazelcast and Infinispan, which can scale using an API, but which require a mechanism for new nodes to register with the existing cluster. This is what JHipster provides with the JHipster Registry.
Here, we are going to use Redis, a well-known in-memory data store that is often used as a cache. Compared to Hazelcast and Infinispan, it has some unique characteristics:
- It is used only as a "remote cache", so it can store more data and will not pollute your JVM heap, but it will be slower for very frequently used data.
- As a managed service, it "scales out" automatically, removing the need for something like the JHipster Registry to register nodes in the cluster.
- It is fully Open Source, so you can use it to store huge amounts of data without paying for an "enterprise version".
What is Azure Cache for Redis?
Azure Cache for Redis is a fully-managed Redis cache, hosted by Azure.
It has a very inexpensive first tier: you can get a cache for about $16 per month, and it can of course grow into a huge, distributed cache with geo-replication if you spend more. You can find all pricing details here.
For our current use case it's a very good choice, as we are trying to be budget-conscious and it can scale out without any effort on our part.
If you want more information, here is the complete documentation on Azure Cache for Redis.
Setting up an Azure Cache for Redis instance is easy using the Azure portal: just use the search box to create it (and be careful to select the correct tier, as prices can go up quickly!):
Redis with the Hibernate 2nd level cache and the Spring Cache Abstraction
Using Redis, there are several Java libraries available to support the Hibernate 2nd level cache and the Spring Cache abstraction:
- Redisson supports both Hibernate 2nd level cache and the Spring Cache abstraction. If you want to unlock its best features, you will need to buy the "pro" version, but the Open Source edition is already enough for most needs.
- Jedis is a very well-known Redis client that only supports the Spring Cache abstraction. It is often seen as the de facto standard Redis client in Java.
- Lettuce is the default Redis client in Spring Boot, and it also only supports the Spring Cache abstraction. It is based on Netty and is thread-safe (unlike Jedis), so it can arguably handle more connections and perform better.
For this post we will use Lettuce, as it is the default option with Spring Boot and indeed seems to be the best fully Open Source option.
Configuring Spring Boot and Azure Cache for Redis
First we added the spring-boot-starter-data-redis library to our pom.xml:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
```
Then we added a specific Spring Boot configuration class, so that Redis is only enabled in production mode. In development mode everything works the same, just without the cache, which keeps our development setup simpler:
```java
package io.github.jdubois.bugtracker.config;

import io.github.jhipster.config.JHipsterConstants;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
@EnableCaching
@Profile(JHipsterConstants.SPRING_PROFILE_PRODUCTION)
public class CacheConfiguration {
}
```
And we configured Redis in our application-prod.yml Spring Boot configuration file:
```yaml
spring:
  cache:
    type: redis
  redis:
    ssl: true
    host: spring-on-azure.redis.cache.windows.net
    port: 6380
    password: Y27iYghxMcY1qoRVyjQkZExS8RaHdln4QfLsqHRNsHE=
```
Important note: many people use unsecured Redis instances, typically because their Redis library doesn't support SSL. The above YAML configuration shows you how to do it correctly: note that Azure Cache for Redis uses port 6380 for SSL, and that its non-secured port is disabled by default (and there is no excuse to enable it!).
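A side note on the access key shown above: committing it to version control is risky. One common approach, sketched here under the assumption that you define a REDIS_PASSWORD environment variable (for instance as an Azure Web App application setting), is to use a property placeholder instead:

```yaml
spring:
  redis:
    # Resolved at startup from the REDIS_PASSWORD environment variable
    # (the variable name and how you set it are up to you)
    password: ${REDIS_PASSWORD}
```

Spring Boot's relaxed binding also lets you skip the placeholder entirely and set the SPRING_REDIS_PASSWORD environment variable directly.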
In order to test our code, we also added some caching to a REST method of the ProjectResource class:
```java
@GetMapping("/projects")
@Cacheable("projects")
public List<Project> getAllProjects() {
    log.error("REST request to get all Projects");
    return projectRepository.findAll();
}
```
This is only for testing, as you will quickly run into cache invalidation issues with this code (you will need to use the @CacheEvict annotation to have a correctly working cache), but it makes it easy to verify that your cache works.
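The invalidation logic that @CacheEvict automates can be sketched in plain Java (hypothetical names, no Spring involved): every write that changes the underlying data must also remove the now-stale cache entry, so the next read recomputes it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ProjectCache {

    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();
    private final List<String> store = new ArrayList<>(); // stand-in for the database

    public List<String> getAllProjects() {
        // Rough equivalent of @Cacheable("projects"): compute on a miss, reuse otherwise
        return cache.computeIfAbsent("projects", key -> List.copyOf(store));
    }

    public void createProject(String name) {
        store.add(name);
        // Rough equivalent of @CacheEvict(value = "projects", allEntries = true):
        // drop the stale entry so the next read reloads fresh data
        cache.remove("projects");
    }
}
```

In the real application, the eviction would go on the POST/PUT/DELETE methods of ProjectResource, so writes and cache state stay consistent.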
Once this is done, you should be able to monitor your cache through the Azure portal:
All changes done for this blog post are available on this commit.