
Scaling Databases With EclipseLink And Redis

Integrating EclipseLink with Redis, using database records as cache entries.

By Long Le · Jan. 06, 21 · Tutorial

Overview

EclipseLink has two types of caches: the shared cache (L2) maintains objects read from the database, and the isolated cache (L1) holds objects for various operations during the lifecycle of a transaction. The L2 cache's lifecycle is tied to a particular JVM and spans multiple transactions. Cache coordination between different JVMs is off by default, but EclipseLink provides a distributed cache coordination feature that you can enable to ensure data in distributed applications remains current. Both the L1 and L2 caches store domain objects.
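
For reference, the shared (L2) cache and cache coordination are controlled through EclipseLink persistence unit properties. The sketch below shows one way to set them programmatically; the persistence unit name "my-pu" is a placeholder.

Java

import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class EclipseLinkCacheConfig {

    public static EntityManagerFactory createFactory() {
        Map<String, String> props = new HashMap<>();
        // The shared (L2) cache is on by default; this just makes the choice explicit
        props.put("eclipselink.cache.shared.default", "true");
        // Optional: cache coordination between JVMs (off by default)
        // props.put("eclipselink.cache.coordination.protocol", "rmi");
        return Persistence.createEntityManagerFactory("my-pu", props);
    }
}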

“Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams.” — redis.io

This article is about EclipseLink and Redis, but the concept can be applied to any ORM and distributed cache library.

Challenges

Unlike Hibernate, which has out-of-the-box support for integrating its L2 cache with Redis, EclipseLink has no equivalent support for L2 integration with a distributed cache.

EclipseLink does provide a CacheInterceptor class with several APIs that developers can, in theory, implement to intercept various operations on the EclipseLink cache. Unfortunately, these APIs are not well documented and are not easy to implement, so you don't yet see any open source libraries supporting EclipseLink L2 integration with Redis.

Solution

The good news is that there is a much easier and simpler approach to integrating EclipseLink with Redis than going through the CacheInterceptor interface. This approach uses the cache-aside pattern to read data and stores the database record as the cache entry. We have been using this approach in production at Intuit for QuickBooks Online Payroll to help scale our database and improve application performance, and it has been a great success.

Cache-aside

  1. When your application needs to read data from the database, it checks Redis (L3) first to see if the data is available
  2. If the data is available (a cache hit), the cached data is returned
  3. If the data is not (a cache miss), the database is queried, the cache is populated, and the data is returned to the caller (a minimal sketch follows this list)
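
The following is a hypothetical, self-contained sketch of that read path; the in-memory map stands in for Redis and the loader function stands in for the database query, so neither is EclipseLink- or Redis-specific.

Java

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal cache-aside sketch: the map stands in for Redis (L3),
// and the loader function stands in for the database query.
public class CacheAsideReader {

    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    public Object read(String key, Function<String, Object> databaseLoader) {
        Object value = cache.get(key);      // 1. check the cache first
        if (value != null) {
            return value;                   // 2. cache hit: return the cached data
        }
        value = databaseLoader.apply(key);  // 3. cache miss: query the database,
        if (value != null) {
            cache.put(key, value);          //    populate the cache,
        }
        return value;                       //    and return the data to the caller
    }
}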

Database Record

DatabaseRecord is an object in EclipseLink that represents a database row as field-value pairs. A DatabaseRecord provides data to one or many domain objects, and EclipseLink has APIs to build domain objects from a DatabaseRecord.

Domain objects are used by the L2 cache. Using DatabaseRecord as the cache entry simplifies the implementation greatly because we don't have to worry about maintaining domain object relationships. The primary key can be combined with the domain class name to create a cache key. At a conceptual level, a DatabaseRecord is similar to a database table row. The important point is that this approach caches data, not the object tree.
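
To make the key scheme concrete, here is a tiny, hypothetical example; the Employee class and the id value 42 are placeholders, and the helper mirrors the makeCacheKey method shown later in the interceptor code.

Java

public class CacheKeyExample {

    // Mirrors the makeCacheKey helper shown later: <SimpleClassName>-<primaryKey>
    static String makeCacheKey(Class<?> domainClass, Object primaryKey) {
        return domainClass.getSimpleName() + "-" + primaryKey;
    }

    public static void main(String[] args) {
        // "Employee" and id 42 are hypothetical examples
        System.out.println(makeCacheKey(Employee.class, 42)); // prints "Employee-42"
    }

    static class Employee { } // placeholder domain class
}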

Here is the conceptual read flow:

[Figure: read flow diagram]

For the implementation, we use AspectJ to hook into the EclipseLink lifecycle and intercept read/write operations in order to populate and invalidate the cache.

Enough Talk, Show Me the Code

DatabaseRecordAspect.java: This class intercepts the selectOneRow and selectAllRows methods used by EclipseLink to read a single object and a collection of objects.

Java

import java.util.Vector;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.eclipse.persistence.internal.queries.ExpressionQueryMechanism;
import org.eclipse.persistence.internal.sessions.AbstractRecord;

@Aspect
public class DatabaseRecordAspect {

    // Matches EclipseLink's internal single-row select
    @Pointcut("execution(org.eclipse.persistence.internal.sessions.AbstractRecord org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectOneRow())")
    public void selectOneRow() {
    }

    // Matches EclipseLink's internal multi-row select
    @Pointcut("execution(java.util.Vector org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectAllRows())")
    public void selectAllRows() {
    }

    @Around("selectOneRow() && this(expressionQueryMechanism)")
    public AbstractRecord aroundSelectOneRow(ProceedingJoinPoint thisJoinPoint, ExpressionQueryMechanism expressionQueryMechanism) throws Throwable {
        return new DatabaseRecordInterceptor().handleSelectOneRow(thisJoinPoint, expressionQueryMechanism);
    }

    @Around("selectAllRows() && this(expressionQueryMechanism)")
    public Vector aroundSelectAllRows(ProceedingJoinPoint thisJoinPoint, ExpressionQueryMechanism expressionQueryMechanism) throws Throwable {
        return new DatabaseRecordInterceptor().handleSelectAllRows(thisJoinPoint, expressionQueryMechanism);
    }
}


DatabaseRecordInterceptor.java: Responsible for intercepting ExpressionQueryMechanism.selectOneRow() and ExpressionQueryMechanism.selectAllRows() to cache the DatabaseRecord before the results are translated into EclipseLink objects.

Java

import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

import org.aspectj.lang.ProceedingJoinPoint;
import org.eclipse.persistence.descriptors.ClassDescriptor;
import org.eclipse.persistence.internal.queries.ExpressionQueryMechanism;
import org.eclipse.persistence.internal.sessions.AbstractRecord;
import org.eclipse.persistence.queries.ObjectLevelReadQuery;
import org.eclipse.persistence.queries.ReadObjectQuery;

// Cache, CacheFactory, CacheType, and NameValuePair are application-level abstractions (not shown).
public class DatabaseRecordInterceptor {

    // This could be Redis, Memcached, Apache Ignite...
    private final Cache cache;

    public DatabaseRecordInterceptor() {
        cache = CacheFactory.getInstance(CacheType.REDIS);
    }

    /**
     * Intercepts ExpressionQueryMechanism.selectOneRow() to cache the AbstractRecord.
     *
     * @param thisJoinPoint            Aspect join point
     * @param expressionQueryMechanism Query mechanism for a given query
     * @return AbstractRecord EclipseLink database record instance
     */
    public AbstractRecord handleSelectOneRow(ProceedingJoinPoint thisJoinPoint, ExpressionQueryMechanism expressionQueryMechanism) throws Throwable {
        ReadObjectQuery readObjectQuery = expressionQueryMechanism.getReadObjectQuery();
        AbstractRecord databaseRecord = null;
        String cachedKey;

        // Look up the cache first
        try {
            cachedKey = extractCacheKey(readObjectQuery);
            if (cachedKey != null) {
                databaseRecord = (AbstractRecord) cache.get(cachedKey);
            }
        } catch (Throwable t) {
            // If the cache lookup fails, fall back to the database query
            return (AbstractRecord) thisJoinPoint.proceed();
        }

        if (databaseRecord == null) { // cache miss
            // Proceed with the database query
            databaseRecord = (AbstractRecord) thisJoinPoint.proceed();
            // then put the result into the cache
            try {
                if (databaseRecord != null && cachedKey == null) {
                    cachedKey = extractCacheKeyFromPrimaryKeyAndAbstractRecord(readObjectQuery, databaseRecord);
                }
                if (cachedKey != null) {
                    cache.put(cachedKey, databaseRecord);
                }
            } catch (Throwable t) {
                // handle exception
            }
        }
        return databaseRecord;
    }

    /**
     * Intercepts ExpressionQueryMechanism.selectAllRows() to cache each AbstractRecord. This is the ReadAllQuery case.
     * EclipseLink doesn't cache the entire collection; it caches the individual objects in the collection by their
     * primary keys. We follow the same algorithm and cache each DatabaseRecord that has a primary key. This is the
     * same key as in L2, so that when L2 is updated we can update the corresponding DatabaseRecord correctly.
     *
     * @param thisJoinPoint            Aspect join point
     * @param expressionQueryMechanism Query mechanism for a given query
     * @return list of AbstractRecord
     */
    public Vector handleSelectAllRows(ProceedingJoinPoint thisJoinPoint, ExpressionQueryMechanism expressionQueryMechanism) throws Throwable {
        Vector rows = (Vector) thisJoinPoint.proceed();
        try {
            if (rows != null && !rows.isEmpty()) {
                try {
                    ObjectLevelReadQuery readObjectQuery = (ObjectLevelReadQuery) expressionQueryMechanism.getQuery();

                    List<NameValuePair> keyAndObjects = new ArrayList<>();

                    for (Object row : rows) {
                        AbstractRecord databaseRecord = (AbstractRecord) row;

                        String cachedKey = extractCacheKeyFromPrimaryKey(readObjectQuery,
                                                                         extractPrimaryKeyFromRow(readObjectQuery, databaseRecord));
                        // The cache key would be null if there is no primary key
                        if (cachedKey != null) {
                            keyAndObjects.add(new NameValuePair(cachedKey, databaseRecord));
                        }
                    }
                    if (!keyAndObjects.isEmpty()) {
                        // This call is batched and asynchronous
                        cache.put(keyAndObjects);
                    }
                } catch (Throwable t) {
                    // handle exception
                }
            }
            return rows;
        } catch (Throwable t) {
            return null;
        }
    }

    //
    // Helpers to generate the cache key from the EclipseLink primary key
    //

    private String extractCacheKey(ReadObjectQuery readObjectQuery) {
        Object primaryKey;

        if (readObjectQuery.isPrimaryKeyQuery()) { // Query by id
            primaryKey = readObjectQuery.getSelectionId();
            if (primaryKey == null) {
                primaryKey = readObjectQuery.getDescriptor().getObjectBuilder().extractPrimaryKeyFromObject(readObjectQuery.getSelectionObject(), readObjectQuery.getSession());
            }
            return extractCacheKeyFromPrimaryKey(readObjectQuery, primaryKey.toString());
        } else {
            AbstractRecord translationRow = readObjectQuery.getTranslationRow();
            primaryKey = extractPrimaryKeyFromRow(readObjectQuery, translationRow);
            return extractCacheKeyFromPrimaryKey(readObjectQuery, primaryKey);
        }
    }

    private String extractCacheKeyFromPrimaryKey(ObjectLevelReadQuery readObjectQuery, Object primaryKey) {
        if (primaryKey != null) {
            return makeCacheKey(primaryKey, readObjectQuery.getDescriptor());
        } else {
            return null;
        }
    }

    private Object extractPrimaryKeyFromRow(ObjectLevelReadQuery readObjectQuery, AbstractRecord row) {
        return readObjectQuery.getDescriptor().getObjectBuilder().extractPrimaryKeyFromRow(row, readObjectQuery.getSession());
    }

    private String extractCacheKeyFromPrimaryKeyAndAbstractRecord(ObjectLevelReadQuery readObjectQuery, AbstractRecord row) {
        if (readObjectQuery != null && row != null) {
            return extractCacheKeyFromPrimaryKey(readObjectQuery, extractPrimaryKeyFromRow(readObjectQuery, row).toString());
        } else {
            return null;
        }
    }

    private String makeCacheKey(final Object pk, ClassDescriptor classDescriptor) {
        return classDescriptor.getJavaClass().getSimpleName() + "-" + pk.toString();
    }
}

For cache invalidation, you just need to register your invalidators with a DescriptorEventAdapter and implement postUpdate(), postDelete(), and postInsert(). When an EclipseLink write occurs, one of these methods gets executed, and you can call your invalidators to delete or update the cache entry in Redis. You should populate and invalidate the cache asynchronously to avoid blocking the application.
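
The article does not include the invalidator code, but a minimal sketch might look like the following. The Cache abstraction (with its assumed deleteAsync method) and the use of a DescriptorCustomizer for registration are assumptions, not the exact production implementation.

Java

import org.eclipse.persistence.config.DescriptorCustomizer;
import org.eclipse.persistence.descriptors.ClassDescriptor;
import org.eclipse.persistence.descriptors.DescriptorEvent;
import org.eclipse.persistence.descriptors.DescriptorEventAdapter;

// Hypothetical invalidator: removes the cached DatabaseRecord whenever EclipseLink
// writes the corresponding row. Cache, CacheFactory, and CacheType are the same
// application-level abstractions used by the interceptor above.
public class RedisCacheInvalidator extends DescriptorEventAdapter implements DescriptorCustomizer {

    private final Cache cache = CacheFactory.getInstance(CacheType.REDIS);

    @Override
    public void customize(ClassDescriptor descriptor) {
        // Register this listener for the entity mapped by the descriptor
        descriptor.getEventManager().addListener(this);
    }

    @Override
    public void postInsert(DescriptorEvent event) {
        invalidate(event);
    }

    @Override
    public void postUpdate(DescriptorEvent event) {
        invalidate(event);
    }

    @Override
    public void postDelete(DescriptorEvent event) {
        invalidate(event);
    }

    private void invalidate(DescriptorEvent event) {
        Object pk = event.getDescriptor().getObjectBuilder()
                .extractPrimaryKeyFromObject(event.getObject(), event.getSession());
        // Same key scheme as makeCacheKey: <SimpleClassName>-<primaryKey>
        String cacheKey = event.getDescriptor().getJavaClass().getSimpleName() + "-" + pk;
        cache.deleteAsync(cacheKey); // assumed async delete, so the write path is not blocked
    }
}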

We use Lettuce as the client library to talk to Redis and Kryo for serialization.
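
As an illustration of that wiring, here is a hedged sketch of a Redis-backed cache built on Lettuce with a Kryo codec. The class names, connection URL, TTL, and codec details are assumptions rather than the exact production setup.

Java

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;
import io.lettuce.core.codec.RedisCodec;

// Hypothetical Redis-backed cache using Lettuce for connectivity and Kryo for serialization.
public class LettuceKryoCache {

    // Kryo instances are not thread-safe, so keep one per thread
    private static final ThreadLocal<Kryo> KRYO = ThreadLocal.withInitial(() -> {
        Kryo kryo = new Kryo();
        kryo.setRegistrationRequired(false);
        return kryo;
    });

    private final RedisCommands<String, Object> commands;

    public LettuceKryoCache(String redisUri) {
        RedisClient client = RedisClient.create(redisUri); // e.g. "redis://localhost:6379"
        StatefulRedisConnection<String, Object> connection = client.connect(new KryoCodec());
        this.commands = connection.sync();
    }

    public Object get(String key) {
        return commands.get(key);
    }

    public void put(String key, Object value) {
        commands.setex(key, 3600, value); // 1-hour TTL as an example
    }

    public void delete(String key) {
        commands.del(key);
    }

    // Lettuce codec that stores keys as UTF-8 strings and values as Kryo-serialized bytes
    private static class KryoCodec implements RedisCodec<String, Object> {

        @Override
        public String decodeKey(ByteBuffer bytes) {
            return StandardCharsets.UTF_8.decode(bytes).toString();
        }

        @Override
        public Object decodeValue(ByteBuffer bytes) {
            byte[] array = new byte[bytes.remaining()];
            bytes.get(array);
            return KRYO.get().readClassAndObject(new Input(array));
        }

        @Override
        public ByteBuffer encodeKey(String key) {
            return StandardCharsets.UTF_8.encode(key);
        }

        @Override
        public ByteBuffer encodeValue(Object value) {
            Output output = new Output(4096, -1); // buffer grows as needed
            KRYO.get().writeClassAndObject(output, value);
            return ByteBuffer.wrap(output.toBytes());
        }
    }
}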

Conclusion

Integrating EclipseLink with Redis is much easier if you cache the data used to populate domain objects. Initially, we attempted to cache domain objects but ran into several issues with EclipseLink. A domain object maintains associations with other objects, so when you read an object back from Redis you have to reconstruct the object tree, which gets complicated when there are lazy-loaded associations among objects. Data caching is simple, and the cache entry size is consistent (i.e., one database row). A predictable cache entry size helps us optimize the Redis cache size and makes serialization and deserialization faster. You don't have to change much of your existing code to get this working. This has been a real game changer for us in reducing our database load and providing consistent performance.


Opinions expressed by DZone contributors are their own.
