@wsky
Last active August 29, 2015
java off-heap

http://www.infoq.com/articles/Open-JDK-and-HashMap-Off-Heap
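The core idea in the article above is keeping data outside the JVM heap so the garbage collector never scans or copies it. As a minimal illustration of what "off-heap" means (plain JDK, not the OpenHFT API; the field layout is made up for this sketch), a direct `ByteBuffer` holds its bytes off heap:

```java
import java.nio.ByteBuffer;

public class OffHeapDemo {
    // Fixed offsets for a tiny hand-laid-out "struct":
    // coupon (double, 8 bytes) then maturity date (long, 8 bytes).
    static final int COUPON_OFFSET = 0;
    static final int MATURITY_OFFSET = 8;

    public static void main(String[] args) {
        // allocateDirect places the backing memory outside the Java heap,
        // so the GC never moves these bytes and no objects are created per field.
        ByteBuffer buf = ByteBuffer.allocateDirect(16);
        buf.putDouble(COUPON_OFFSET, 5.0 / 100);
        buf.putLong(MATURITY_OFFSET, 20140915L);

        System.out.println("coupon=" + buf.getDouble(COUPON_OFFSET)
                + " maturity=" + buf.getLong(MATURITY_OFFSET));
    }
}
```

Libraries such as OpenHFT automate exactly this kind of fixed-offset layout, generating the accessor code from an interface instead of hand-writing offsets.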

https://github.com/OpenHFT/HugeCollections/blob/master/collections/src/test/java/net/openhft/collections/fromdocs/OpenJDKAndHashMapExamplesTest.java

```java
SharedHashMap<String, BondVOInterface> shm = new SharedHashMapBuilder()
        .generatedValueType(true)
        .entrySize(512)
        .file(new File(TMP + "/shm-myBondPortfolioSHM"))
        .kClass(String.class)
        .vClass(BondVOInterface.class)
        .create();

BondVOInterface bondVO = DataValueClasses.newDirectReference(BondVOInterface.class);
shm.acquireUsing("369604103", bondVO);
bondVO.setIssueDate(parseYYYYMMDD("20130915"));
bondVO.setMaturityDate(parseYYYYMMDD("20140915"));
bondVO.setCoupon(5.0 / 100); // 5.0%

BondVOInterface.MarketPx mpx930 = bondVO.getMarketPxIntraDayHistoryAt(0);
mpx930.setAskPx(109.2);
mpx930.setBidPx(106.9);

BondVOInterface.MarketPx mpx1030 = bondVO.getMarketPxIntraDayHistoryAt(1);
mpx1030.setAskPx(109.7);
mpx1030.setBidPx(107.6);

SharedHashMap<String, BondVOInterface> shmB = new SharedHashMapBuilder()
        .generatedValueType(true)
        .entrySize(320)
        .file(new File(TMP + "/shm-myBondPortfolioSHM"))
        .kClass(String.class)
        .vClass(BondVOInterface.class)
        .create();

// Zero copy, but creates a new off-heap reference each time.
BondVOInterface bondVOB = shmB.get("369604103");
assertEquals(5.0 / 100, bondVOB.getCoupon(), 0.0);

BondVOInterface.MarketPx mpx930B = bondVOB.getMarketPxIntraDayHistoryAt(0);
assertEquals(109.2, mpx930B.getAskPx(), 0.0);
assertEquals(106.9, mpx930B.getBidPx(), 0.0);

BondVOInterface.MarketPx mpx1030B = bondVOB.getMarketPxIntraDayHistoryAt(1);
assertEquals(109.7, mpx1030B.getAskPx(), 0.0);
assertEquals(107.6, mpx1030B.getBidPx(), 0.0);

// ZERO-COPY:
// our reusable, mutable off-heap reference, generated from the interface.
BondVOInterface bondZC = DataValueClasses.newDirectReference(BondVOInterface.class);

// Look up the key and point the reference at the data.
if (shm.getUsing("369604103", bondZC) != null) {
    // Found the key, and bondZC now points at the entry.
    // Read directly without touching the rest of the record.
    long _matDate = bondZC.getMaturityDate();
    // Write just this field; again, we assume we are the only writer.
    bondZC.setMaturityDate(parseYYYYMMDD("20440315"));

    // Demo of OpenHFT off-heap array[] processing.
    int tradingHour = 2; // current trading hour intra-day
    BondVOInterface.MarketPx mktPx = bondZC.getMarketPxIntraDayHistoryAt(tradingHour);
    if (mktPx.getCallPx() < 103.50) {
        mktPx.setParPx(100.50);
        mktPx.setAskPx(102.00);
        mktPx.setBidPx(99.00);
        // No setMarketPxIntraDayHistoryAt(...) is needed: with zero copy,
        // the original entry has already been changed.
    }
}

// Note: bondZC will be full of default values and zero-length strings the first time.

// From this point, all operations are completely record/entry local;
// no other resource is involved.
// Now perform thread-safe operations on the reference.
bondZC.addAtomicMaturityDate(16 * 24 * 3600 * 1000L); // 20440331

bondZC.addAtomicCoupon(-1 * bondZC.getCoupon()); // MT-safe! Now a zero-coupon bond.

// Say we need to do something more complicated:
// set the thread's getId() to match the process id of the thread.
AffinitySupport.setThreadId();

bondZC.busyLockEntry();
try {
    String str = bondZC.getSymbol();
    if (str.equals("IBM_HY_2044"))
        bondZC.setSymbol("OPENHFT_IG_2044");
} finally {
    bondZC.unlockEntry();
}

// Cleanup: close both maps and delete the backing file they shared.
shm.close();
shmB.close();
new File(TMP + "/shm-myBondPortfolioSHM").delete();
```
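The `BondVOInterface` used above is not shown in the snippet; OpenHFT generates an off-heap implementation from such an interface. The pattern behind it is a flyweight: an object that holds no state of its own, only a position in a shared buffer, with getters and setters that read and write fixed offsets. A hand-rolled sketch of the same idea (hypothetical class names, plain JDK, not the OpenHFT API):

```java
import java.nio.ByteBuffer;

// Hypothetical hand-rolled flyweight, illustrating what OpenHFT's
// generated value classes do under the hood.
class BondFlyweight {
    static final int COUPON = 0;    // double, 8 bytes
    static final int MATURITY = 8;  // long,   8 bytes
    static final int SIZE = 16;     // bytes per record

    private ByteBuffer store;
    private int base;

    // Re-point the same flyweight at a record without allocating.
    BondFlyweight at(ByteBuffer store, int base) {
        this.store = store;
        this.base = base;
        return this;
    }

    double getCoupon()           { return store.getDouble(base + COUPON); }
    void setCoupon(double v)     { store.putDouble(base + COUPON, v); }
    long getMaturityDate()       { return store.getLong(base + MATURITY); }
    void setMaturityDate(long v) { store.putLong(base + MATURITY, v); }
}

public class FlyweightDemo {
    public static void main(String[] args) {
        ByteBuffer store = ByteBuffer.allocateDirect(BondFlyweight.SIZE * 2);
        BondFlyweight bond = new BondFlyweight();

        // "Zero copy": writes go straight to the off-heap buffer,
        // with no intermediate value objects.
        bond.at(store, 0).setCoupon(5.0 / 100);
        bond.at(store, 0).setMaturityDate(20140915L);

        // A second flyweight over the same memory sees the writes immediately.
        BondFlyweight reader = new BondFlyweight().at(store, 0);
        System.out.println("coupon=" + reader.getCoupon());
    }
}
```

In the real library the buffer is a shared, memory-mapped file, which is why a second process mapping the same file sees the same records.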

https://github.com/OpenHFT/HugeCollections

https://github.com/OpenHFT/Chronicle-Map

A low-latency key-value store replicated across your network, with eventual consistency, persistence and high performance.

http://www.javacodegeeks.com/2014/05/chronicle-and-low-latency-in-java.html

I was watching an excellent presentation by Roland Kuhn of Typesafe, Introducing Reactive Streams. At first glance it appears to have goals similar to Chronicle's, but as you dig into the details it becomes clear that a few key assumptions are fundamentally different.

https://github.com/OpenHFT/Koloboke

When to use Chronicle Map or Koloboke Map

We suggest you use Chronicle Map if you need to:

- store more than half a billion entries;
- distribute the map between processes;
- use off-heap memory, because your keys/values take too much memory and the JVM suffers from GC pauses.

Koloboke is ideal when you don't have to share data between processes and you have fewer than half a billion entries.

Koloboke is designed for collections of primitives like List, Set, and Map. Chronicle Map is designed for a Map of data value types.
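To see why a primitive-specialized map matters, compare `HashMap<Integer, Integer>`, which boxes every key and value into an object, with an open-addressed int-to-int table in the spirit of Koloboke's generated specializations. This is a simplified sketch, not the Koloboke API (it reserves key 0 as the empty-slot marker and never resizes):

```java
// Minimal open-addressed int -> int map, illustrating the kind of
// primitive specialization Koloboke generates: no boxing, no Entry objects,
// just two parallel arrays. Simplification: key 0 marks an empty slot,
// and capacity must be a power of two and large enough for the data.
class IntIntMap {
    private final int[] keys;
    private final int[] values;

    IntIntMap(int capacity) {
        keys = new int[capacity];
        values = new int[capacity];
    }

    void put(int key, int value) {
        int mask = keys.length - 1;
        int i = (key * 0x9E3779B9) >>> 16 & mask; // cheap hash, linear probing
        while (keys[i] != 0 && keys[i] != key) i = (i + 1) & mask;
        keys[i] = key;
        values[i] = value;
    }

    int get(int key, int missing) {
        int mask = keys.length - 1;
        int i = (key * 0x9E3779B9) >>> 16 & mask;
        while (keys[i] != 0) {
            if (keys[i] == key) return values[i];
            i = (i + 1) & mask;
        }
        return missing;
    }
}

public class PrimitiveMapDemo {
    public static void main(String[] args) {
        IntIntMap map = new IntIntMap(1 << 16);
        for (int k = 1; k <= 10_000; k++) map.put(k, k * 2);
        System.out.println(map.get(7, -1)); // 14, with zero boxed Integers
    }
}
```

The 10,000 entries above occupy two flat int arrays; the boxed `HashMap` equivalent would allocate tens of thousands of `Integer` and `Entry` objects for the GC to trace.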

http://chronicle.software/products/koloboke-collections/

Supports a write everything model.

It is common knowledge that leaving DEBUG-level logging on can slow down your application dramatically. There is a tension between recording everything you might want to know later and the impact on your application. Chronicle is designed to be fast enough that you can record everything. If it replaces the queues and IPC connections in your system, it can improve performance, and you get "record everything" for free, or better. Being able to record everything means you can add trace timings through every stage of your system, monitor it, and also drill into the worst 1% of delays. This is not something you can do with a profiler, which gives you averages. With Chronicle you can answer questions such as: which parts of the system were responsible for the slowest 20 events in a day?
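Once every event's latency is recorded, "drill into the worst delays" is a simple exact computation rather than a sampled estimate. A sketch in plain JDK (the recording side, a Chronicle-style journal, is assumed; `Event` and `slowest` are hypothetical names):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SlowestEvents {
    record Event(long id, long latencyNanos) {}

    // Return the n slowest events. Because every event was recorded,
    // this is exact, not an average as a sampling profiler would give you.
    static List<Event> slowest(List<Event> events, int n) {
        return events.stream()
                .sorted(Comparator.comparingLong(Event::latencyNanos).reversed())
                .limit(n)
                .toList();
    }

    public static void main(String[] args) {
        // Synthetic day of traffic: most events ~10us, every 97th ~5ms.
        List<Event> day = new ArrayList<>();
        for (long id = 0; id < 1_000; id++) {
            day.add(new Event(id, id % 97 == 0 ? 5_000_000 : 10_000));
        }
        System.out.println("slowest: " + slowest(day, 20));
    }
}
```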

Chronicle has minimal interaction with the Operating System.

System calls are slow, and if you can avoid calling the OS, you can save significant amounts of latency. For example, sending a message over TCP on loopback can add around 10 microseconds of latency between writing and reading the data. Writing to a Chronicle is a plain write to memory, and reading from it is a plain read from memory, with a latency of around 0.2 microseconds. (And, as mentioned before, you get persistence as well.)
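The effect of keeping the OS out of the per-message path can be shown with a toy model: a counting stream stands in for the kernel boundary, and buffering in user-space memory (the same trick a memory-mapped Chronicle exploits, taken further) collapses one "system call" per message into one per buffer. All names here are hypothetical; this counts calls, it does not measure time:

```java
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class SyscallCountDemo {
    // Stand-in for a file or socket: counts how often the "OS" is entered.
    static class CountingStream extends OutputStream {
        int calls;
        @Override public void write(int b) { calls++; }
        @Override public void write(byte[] b, int off, int len) { calls++; }
    }

    public static void main(String[] args) throws IOException {
        byte[] msg = new byte[64];

        // Unbuffered: one "system call" per 64-byte message.
        CountingStream raw = new CountingStream();
        for (int i = 0; i < 1_000; i++) raw.write(msg);
        System.out.println("unbuffered calls: " + raw.calls); // 1000

        // Buffered: messages accumulate in user-space memory first;
        // the "OS" is entered only when the 8 KiB buffer fills.
        CountingStream dev = new CountingStream();
        BufferedOutputStream buf = new BufferedOutputStream(dev, 8192);
        for (int i = 0; i < 1_000; i++) buf.write(msg);
        buf.flush();
        System.out.println("buffered calls: " + dev.calls); // 8
    }
}
```

A memory-mapped Chronicle goes one step further: there is no per-message or per-buffer call at all, because the OS flushes the mapped pages asynchronously.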

http://chronicle.software/products/chronicle-logger/

Chronicle Logger supports most of the standard logging APIs, including SLF4J, Sun logging (java.util.logging), Apache Commons Logging, and Log4j.

https://github.com/OpenHFT/Chronicle-Map#multiple-processes-on-the-same-server-with-replication

![](https://camo.githubusercontent.com/1e18d134be0148150ca82ba1d288ac4cca9f7a1c/687474703a2f2f6368726f6e69636c652e736f6674776172652f77702d636f6e74656e742f75706c6f6164732f323031342f30372f4368726f6e69636c652d4d61702d72656d6f74652d73746174656c6573732d6d61705f30345f76422e6a7067)

https://github.com/OpenHFT/Chronicle-Queue
http://vanillajava.blogspot.com/2014/10/kafra-benchmark-on-chronicle-queue.html
http://chronicle.software/products/chronicle-queue/
```
Chronicle Queue offers:
- low-latency, durable interprocess communication (IPC)
- no data loss: every value is stored rather than replaced
- no GC, as everything is done off heap
- persistence to disk through memory-mapped files
- IPC between Java processes, or between threads in a Java process
- a simple API for ease of use
- the use of Java as a low-latency programming language
- replay-ability of all your inputs and outputs
- concurrent writers across processes on the same machine
- concurrent readers across machines on your network, using TCP replication
- embedded performance in all processes
- insulation of your producer from a slow consumer; your consumer can be terabytes behind your producer
- all data stored off heap, to reduce GC impact
- data written synchronously, so that if your application fails on the next line, no data is lost
- data readable less than a microsecond after it has been written
```
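The persistence, replay, and "readable within a microsecond" items in the list above all rest on one mechanism: records appended to a memory-mapped file, so that a write is a plain memory store which the OS later flushes to disk, and a reader mapping the same file sees it immediately. A minimal sketch with a hypothetical length-prefixed record format (plain JDK, not the Chronicle wire format):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedJournalDemo {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("journal", ".dat");
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // The mapping is plain memory: appending a record involves no
            // per-message system call; the OS persists the dirty pages.
            MappedByteBuffer writer = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

            // Writer: append length-prefixed records.
            for (String msg : new String[] {"tick-1", "tick-2", "tick-3"}) {
                byte[] bytes = msg.getBytes(StandardCharsets.UTF_8);
                writer.putInt(bytes.length);
                writer.put(bytes);
            }

            // Reader: replay from the start. A tailer in another process
            // mapping the same file would see the same bytes; a length
            // prefix of 0 means "end of journal" in this toy format.
            MappedByteBuffer reader = ch.map(FileChannel.MapMode.READ_ONLY, 0, 4096);
            int len;
            while ((len = reader.getInt()) > 0) {
                byte[] bytes = new byte[len];
                reader.get(bytes);
                System.out.println(new String(bytes, StandardCharsets.UTF_8));
            }
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```

Chronicle Queue adds the parts this sketch omits: file rolling, concurrent writers, safe publication between processes, and TCP replication to other machines.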
wsky commented May 29, 2015:
Moving to Chronicle not only gives you huge performance benefits; it also reduces your budget requirements and frees up compute resources for added functionality.
