   /*-
    * See the file LICENSE for redistribution information.
    *
    * Copyright (c) 2002, 2013 Oracle and/or its affiliates.  All rights reserved.
    *
    */
   
   package com.sleepycat.je.tree;
   
  import java.util.Arrays;
  
/**
 * An IN represents an Internal Node in the JE tree.
 *
 * Explanation of KD (KnownDeleted) and PD (PendingDelete) entry flags
 * ===================================================================
 *
 * PD: set for all LN entries that are deleted, even before the LN is
 * committed.  Is used as an authoritative (transactionally correct)
 * indication that an LN is deleted.  PD will be cleared if the txn for the
 * deleted LN is aborted.
 *
 * KD: set under special conditions for entries containing LNs which are
 * known to be obsolete.  Not used for entries in an active/uncommitted
 * transaction.
 *
 * First notice that IN.fetchTarget will allow a FileNotFoundException when
 * the PD or KD flag is set on the entry.  And it will allow a NULL_LSN when
 * the KD flag is set.
 *
 * KD was implemented first, and was originally used when the cleaner
 * attempts to migrate an LN and discovers it is deleted (see
 * Cleaner.migrateLN).  We need KD because the INCompressor may not have run,
 * and may not have compressed the BIN.  There's the danger that we'll try to
 * fetch that entry, and that the file was deleted by the cleaner.
 *
 * KD was used more recently when an unexpected exception occurs while
 * logging an LN, after inserting the entry.  Rather than delete the entry to
 * clean up, we mark the entry KD so it won't cause a fetch error later.  In
 * this case the entry LSN is NULL_LSN.  See Tree.insertNewSlot.
 *
 * PD is closely related to the first use of KD above (for cleaned deleted
 * LNs) and came about because of a cleaner optimization we make.  The
 * cleaner considers all deleted LN log entries to be obsolete, without doing
 * a tree lookup, and without any record of an obsolete offset.  This makes
 * the cost of cleaning of deleted LNs very low.  For example, if the log
 * looks like this:
 *
 *     100  LNA
 *     200  delete of LNA
 *
 * then LSN 200 will be considered obsolete when this file is processed by
 * the cleaner.  After all, only two things can happen: (1) the txn commits,
 * and we don't need LSN 200, because we can wipe this LN out of the tree, or
 * (2) the txn aborts, and we don't need LSN 200, because we are going to
 * revert to LSN 100/LNA.
 *
 * We set PD for the entry of a deleted LN at the time of the operation, and
 * we clear PD if the transaction aborts.  There is no real danger that this
 * log entry will be processed by the cleaner before it's committed, because
 * cleaning can only happen after the first active LSN.
 *
 * Just as in the first use of KD above, setting PD is necessary to avoid a
 * fetch error, when the file is deleted by the cleaner but the entry
 * containing the deleted LN has not been deleted by the INCompressor.
 *
 * PD is also set in replication rollback, when LNs are marked as invisible.
 *
 * When LSN locking was implemented (see CursorImpl.lockLN), the PD flag took
 * on additional meaning.  PD is used to determine whether an LN is deleted
 * without fetching it, and therefore is relied on to be transactionally
 * correct.
 *
 * In addition to the setting and use of the KD/PD flags described above, the
 * situation is complicated by the fact that we must restore the state of
 * these flags during abort and recovery, and set them properly during slot
 * reuse.
 *
 * We have been meaning to consider whether PD and KD can be consolidated
 * into one flag: simply the Deleted flag.  The Deleted flag would be set in
 * the same way as PD is currently set, as well as the second use of KD
 * described above (when the LSN is NULL_LSN after an insertion error).  The
 * use of KD and PD for invisible entries and recovery rollback should also
 * be considered.
 *
 * If we consolidate the two flags and set the Deleted flag during a delete
 * operation (like PD), we'll have to remove optimizations (in CursorImpl for
 * example) that consider a slot deleted when KD is set.  Since KD is rarely
 * set currently, this shouldn't have a noticeable performance impact.
 */
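The PD set/clear lifecycle described above can be illustrated with a standalone sketch. The `EntryStatesSketch` class below is hypothetical (not part of JE); only the bit values mirror the `KNOWN_DELETED_BIT`/`PENDING_DELETED_BIT` constants declared in this class.

```java
// Standalone sketch of the per-entry KD/PD state bits described above.
// The bit values mirror the constants declared in IN; the helper itself
// is illustrative only.
class EntryStatesSketch {

    static final byte KNOWN_DELETED_BIT = 0x1;
    static final byte PENDING_DELETED_BIT = 0x8;

    static byte setPendingDeleted(byte state) {
        return (byte) (state | PENDING_DELETED_BIT);   // set PD at delete time
    }

    static byte clearPendingDeleted(byte state) {
        return (byte) (state & ~PENDING_DELETED_BIT);  // clear PD on txn abort
    }

    static boolean isDeleted(byte state) {
        // A slot is treated as deleted if either flag is set.
        return (state & (KNOWN_DELETED_BIT | PENDING_DELETED_BIT)) != 0;
    }

    public static void main(String[] args) {
        byte state = 0;
        state = setPendingDeleted(state);       // txn deletes the LN
        assert isDeleted(state);
        state = clearPendingDeleted(state);     // txn aborts, PD reverts
        assert !isDeleted(state);
    }
}
```

Run with `java -ea` to enable the assertions.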
 
 public class IN extends Node implements Comparable<IN>, Loggable {
 
     private static final String BEGIN_TAG = "<in>";
     private static final String END_TAG = "</in>";
     private static final String TRACE_SPLIT = "Split:";
     private static final String TRACE_DELETE = "Delete:";
 
     private static final byte KNOWN_DELETED_BIT = 0x1;
     private static final byte CLEAR_KNOWN_DELETED_BIT = ~0x1;
     private static final byte DIRTY_BIT = 0x2;
     private static final byte CLEAR_DIRTY_BIT = ~0x2;
     private static final byte MIGRATE_BIT = 0x4;
     private static final byte CLEAR_MIGRATE_BIT = ~0x4;
     private static final byte PENDING_DELETED_BIT = 0x8;
     private static final byte CLEAR_PENDING_DELETED_BIT = ~0x8;
 
     private static final int BYTES_PER_LSN_ENTRY = 4;
     private static final int MAX_FILE_OFFSET = 0xfffffe;
     private static final int THREE_BYTE_NEGATIVE_ONE = 0xffffff;
 
     /*
      * Levels:
      * The mapping tree has levels in the 0x20000 -> 0x2ffff number space.
      * The main tree has levels in the 0x10000 -> 0x1ffff number space.
      * The duplicate tree levels are in 0-> 0xffff number space.
      */
     public static final int DBMAP_LEVEL = 0x20000;
     public static final int MAIN_LEVEL = 0x10000;
     public static final int LEVEL_MASK = 0x0ffff;
     public static final int MIN_LEVEL = -1;
     public static final int MAX_LEVEL = Integer.MAX_VALUE;
     public static final int BIN_LEVEL = MAIN_LEVEL | 1;
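The level number spaces described in the comment above can be sketched standalone. `LevelSketch` is a hypothetical helper, for illustration only; the constants mirror the ones declared here.

```java
// Standalone sketch of the level number spaces: the mapping tree lives in
// 0x20000-0x2ffff, the main tree in 0x10000-0x1ffff, and LEVEL_MASK
// extracts the raw level.  LevelSketch is illustrative only.
class LevelSketch {

    static final int DBMAP_LEVEL = 0x20000;
    static final int MAIN_LEVEL = 0x10000;
    static final int LEVEL_MASK = 0x0ffff;

    // Tag a raw level with the main-tree number space.
    static int mainTreeLevel(int rawLevel) {
        return rawLevel | MAIN_LEVEL;
    }

    // Recover the raw level regardless of which tree it belongs to.
    static int rawLevel(int level) {
        return level & LEVEL_MASK;
    }

    // A level belongs to the main tree if its high bits match MAIN_LEVEL.
    static boolean isMainTree(int level) {
        return (level & ~LEVEL_MASK) == MAIN_LEVEL;
    }

    public static void main(String[] args) {
        int level = mainTreeLevel(2);          // an upper IN in the main tree
        assert level == 0x10002;
        assert rawLevel(level) == 2;
        assert isMainTree(level);
        assert !isMainTree(DBMAP_LEVEL | 1);   // mapping-tree BIN level
    }
}
```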
 
     /*
      * IN eviction types returned by getEvictionType.
      */
     public static final int MAY_NOT_EVICT = 0;
     public static final int MAY_EVICT_LNS = 1;
     public static final int MAY_EVICT_NODE = 2;
 
     private static final int IN_DIRTY_BIT = 0x1;
     private static final int IN_RECALC_TOGGLE_BIT = 0x2;
     private static final int IN_IS_ROOT_BIT = 0x4;
     private int flags; // not persistent
 
     /* The unique id of this node. */
     private long nodeId;
 
     protected SharedLatch latch;
     private long generation;
     private int nEntries;
     private byte[] identifierKey;
 
     /*
      * The children of this IN. Only the ones that are actually in the cache
      * have non-null entries. Specialized sparse array represents are used to
      * represent the entries. The representation can mutate as modifications
      * are made to it.
      */
     private INTargetRep entryTargets;
 
     /*
      * entryKeyVals contains the keys in their entirety if key prefixing is not
      * being used. If prefixing is enabled, then keyPrefix contains the prefix
      * and entryKeyVals contains the suffixes.
      */
     private INKeyRep entryKeyVals;
     private byte[] keyPrefix;
 
     /*
      * The following entryLsnXXX fields are used for storing LSNs.  There are
      * two possible representations: a byte array based rep, and a long array
      * based one.  For compactness, the byte array rep is used initially.  A
      * single byte[] that uses four bytes per LSN is used. The baseFileNumber
      * field contains the lowest file number of any LSN in the array.  Then for
      * each entry (four bytes each), the first byte contains the offset from
      * the baseFileNumber of that LSN's file number.  The remaining three bytes
      * contain the file offset portion of the LSN.  Three bytes will hold a
      * maximum offset of 16,777,214 (0xfffffe), so with the default JE log file
      * size of 10,000,000 bytes this works well.
      *
      * If either (1) the difference in file numbers exceeds 127
      * (Byte.MAX_VALUE) or (2) the file offset is greater than 16,777,214, then
      * the byte[] based rep mutates to a long[] based rep.
      *
      * In the byte[] rep, DbLsn.NULL_LSN is represented by setting the file
      * offset bytes for a given entry to -1 (0xffffff).
      */
     private long baseFileNumber;
     private byte[] entryLsnByteArray;
     private long[] entryLsnLongArray;
     private byte[] entryStates;
     private int level;
     private long inMemorySize;
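The compact byte-array LSN representation described in the comment above can be sketched standalone. `CompactLsnSketch` is a hypothetical stand-in for the real `entryLsnByteArray` logic: it shows the 1-byte file-number delta plus 3-byte file offset packing, but handles neither mutation to `long[]` nor base-file-number adjustment.

```java
// Sketch of the compact 4-bytes-per-LSN representation: one byte holds the
// offset from the base file number, three bytes hold the file offset, and
// offset 0xffffff marks NULL_LSN.  Illustrative only.
class CompactLsnSketch {

    static final long NULL_LSN = -1L;

    // Pack (fileNumber, fileOffset) into 4 bytes at entry idx.
    static void setLsn(byte[] arr, int idx, long baseFile,
                       long fileNumber, int fileOffset) {
        int off = idx << 2;
        arr[off] = (byte) (fileNumber - baseFile);      // 1-byte file delta
        arr[off + 1] = (byte) ((fileOffset >> 16) & 0xff);
        arr[off + 2] = (byte) ((fileOffset >> 8) & 0xff);
        arr[off + 3] = (byte) (fileOffset & 0xff);      // 3-byte offset
    }

    // Recover the LSN; an offset of 0xffffff means NULL_LSN.
    static long getLsn(byte[] arr, int idx, long baseFile) {
        int off = idx << 2;
        int fileOffset = ((arr[off + 1] & 0xff) << 16) |
                         ((arr[off + 2] & 0xff) << 8) |
                         (arr[off + 3] & 0xff);
        if (fileOffset == 0xffffff) {
            return NULL_LSN;
        }
        long fileNumber = baseFile + arr[off];
        return (fileNumber << 32) | fileOffset;         // simple LSN encoding
    }

    public static void main(String[] args) {
        byte[] arr = new byte[4 * 4];                   // room for 4 entries
        setLsn(arr, 2, 100, 103, 123456);
        long lsn = getLsn(arr, 2, 100);
        assert (lsn >> 32) == 103;                      // file number restored
        assert (lsn & 0xffffffffL) == 123456;           // offset restored
        // Writing offset 0xffffff marks the slot NULL, as the real code
        // does via setFileOffset(offset, -1).
        setLsn(arr, 0, 100, 100, 0xffffff);
        assert getLsn(arr, 0, 100) == NULL_LSN;
    }
}
```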
 
     private boolean inListResident; // true if this IN is on the IN list
 
     /* Location of last full version. */
     private long lastFullVersion = DbLsn.NULL_LSN;
 
     /*
      * A sequence of obsolete info that cannot be counted as obsolete until an
      * ancestor IN is logged non-provisionally.
      */
 
     /* See convertDupKeys. */
     private boolean needDupKeyConversion;
 
     /* Used to indicate that an exact match was found in findEntry. */
     public static final int EXACT_MATCH = (1 << 16);
 
     /* Used to indicate that an insert was successful. */
     public static final int INSERT_SUCCESS = (1 << 17);
 
     /*
      * accumluted memory budget delta.  Once this exceeds
      * MemoryBudget.ACCUMULATED_LIMIT we inform the MemoryBudget that a change
      * has occurred.  See SR 12273.
      */
     private int accumulatedDelta = 0;
 
     /*
      * Max allowable accumulation of memory budget changes before MemoryBudget
      * should be updated. This allows for consolidating multiple calls to
      * updateXXXMemoryBudget() into one call.  Not declared final so that the
      * unit tests can modify it.  See SR 12273.
      */
     public static int ACCUMULATED_LIMIT = 1000;

    
    /**
     * Create an empty IN, with no node ID, to be filled in from the log.
     */
    public IN() {
        init(null, Key.EMPTY_KEY, 0, 0);
    }

    
    /**
     * Create a new IN.
     */
    public IN(DatabaseImpl dbImpl,
              byte[] identifierKey,
              int capacity,
              int level) {

        nodeId =
            dbImpl.getDbEnvironment().getNodeSequence().getNextLocalNodeId();
        init(dbImpl, identifierKey, capacity,
             generateLevel(dbImpl.getId(), level));
        initMemorySize();
    }

    
    /**
     * For Sizeof, set all array fields to null, since they are not part of
     * the fixed overhead.
     */
    public IN(@SuppressWarnings("unused") SizeofMarker marker) {
        entryTargets = null;
        entryKeyVals = null;
        keyPrefix = null;
        entryLsnByteArray = null;
        entryLsnLongArray = null;
        entryStates = null;
    }

    
    /**
     * Initialize IN object.
     */
    protected void init(DatabaseImpl db,
                        @SuppressWarnings("hiding")
                        byte[] identifierKey,
                        int initialCapacity,
                        @SuppressWarnings("hiding")
                        int level) {
        setDatabase(db);
        latch = new SharedLatch(shortClassName() + getNodeId());
        latch.setExclusiveOnly(EnvironmentImpl.getSharedLatches() ?
                isAlwaysLatchedExclusively() :
                true);
        assert latch.setNoteLatch(true);
        generation = 0;
        flags = 0;
        nEntries = 0;
        this.identifierKey = identifierKey;
        entryTargets = INTargetRep.NONE;
        entryKeyVals = new INKeyRep.Default(initialCapacity);
        keyPrefix = null;
        baseFileNumber = -1;
        entryLsnByteArray = new byte[initialCapacity << 2];
        entryLsnLongArray = null;
        entryStates = new byte[initialCapacity];
        this.level = level;
        inListResident = false;
    }

    
    /**
     * Initialize the per-node memory count by computing its memory usage.
     */
    protected void initMemorySize() {
        entryKeyVals = entryKeyVals.compact(this);
        inMemorySize = computeMemorySize();
    }
 
    public long getNodeId() {
        return nodeId;
    }

    /* For unit tests only. */
    void setNodeId(long nid) {
        nodeId = nid;
    }
 
     @Override
     public boolean equals(Object obj) {
         if (!(obj instanceof IN)) {
             return false;
         }
        IN in = (IN) obj;
         return (this.getNodeId() == in.getNodeId());
     }

    
    /**
     * We would like as even a hash distribution as possible so that the
     * Evictor's LRU is as accurate as possible.  ConcurrentHashMap takes the
     * value returned by this method and runs its own hash algorithm on it.
     * So a bit complement of the node ID is sufficient as the return value,
     * and is a little better than returning just the node ID.  If we use a
     * different container in the future that does not re-hash the return
     * value, we should probably implement the Wang-Jenkins hash function
     * here.
     */
    @Override
    public int hashCode() {
        return (int) ~getNodeId();
    }

    
    /**
     * Sort based on equality key.
     */
     public int compareTo(IN argIN) {
         long argNodeId = argIN.getNodeId();
         long myNodeId = getNodeId();
 
         if (myNodeId < argNodeId) {
             return -1;
         } else if (myNodeId > argNodeId) {
             return 1;
         } else {
             return 0;
         }
     }

    
    /**
     * Create a new IN.  Need this because we can't call newInstance()
     * without getting a 0 for nodeId.
     */
    protected IN createNewInstance(byte[] identifierKey,
                                   int maxEntries,
                                   int level) {
        return new IN(databaseImpl, identifierKey, maxEntries, level);
    }
 
     /*
      * Return whether the shared latch for this kind of node should be of the
      * "always exclusive" variety.  Presently, only IN's are actually latched
      * shared.  BINs are latched exclusive only.
      */
     boolean isAlwaysLatchedExclusively() {
         return false;
     }

    
    /**
     * Initialize a node that has been read in from the log.
     */
    @Override
    public void postFetchInit(DatabaseImpl db, long lastLoggedLsn) {
        commonPostFetchInit(db, lastLoggedLsn);
        convertDupKeys(); // must be after initMemorySize
        db.getDbEnvironment().getInMemoryINs().add(this);
    }

    
    /**
     * Initialize a node read in during recovery.
     */
    public void postRecoveryInit(DatabaseImpl db, long lastLoggedLsn) {
        commonPostFetchInit(db, lastLoggedLsn);
    }

    
    /**
     * Common actions of postFetchInit and postRecoveryInit.
     */
    private void commonPostFetchInit(DatabaseImpl db, long lastLoggedLsn) {
        setDatabase(db);
        setLastLoggedLsn(lastLoggedLsn);
        initMemorySize(); // compute before adding to IN list
    }

    
    /**
     * Sets the last logged LSN, which for a BIN may be a delta.  In this
     * class, the last logged version and last full version are the same.  In
     * the BIN class, this method is overridden since they may be different.
     */
    void setLastLoggedLsn(long lsn) {
        setLastFullLsn(lsn);
    }

    
    /**
     * Returns the last logged LSN, or NULL_LSN if never logged.  In this
     * class, the last logged version and last full version are the same.  In
     * the BIN class, this method is overridden since they may be different.
     */
    public long getLastLoggedVersion() {
        return getLastFullVersion();
    }

    
    /**
     * Sets the last full version LSN.
     */
    void setLastFullLsn(long lsn) {
        lastFullVersion = lsn;
    }

    
    /**
     * Returns the last full version LSN, or NULL_LSN if never logged.
     * Public for unit testing.
     */
    public long getLastFullVersion() {
        return lastFullVersion;
    }

    
    /**
     * Returns the last delta version LSN, or NULL_LSN if a delta was not
     * last logged.  Public for unit testing.
     */
    public long getLastDeltaVersion() {
        return DbLsn.NULL_LSN;
    }

    
    /**
     * Latch this node exclusive, optionally setting the generation.
     */
    public void latch(CacheMode cacheMode)
        throws DatabaseException {

        setGeneration(cacheMode);
        latch.acquireExclusive();
    }

    
    /**
     * Latch this node shared, optionally setting the generation.
     */
    @Override
    public void latchShared(CacheMode cacheMode)
        throws DatabaseException {

        setGeneration(cacheMode);
        latch.acquireShared();
    }

    
    /**
     * Latch this node if it is not latched by another thread, optionally
     * setting the generation if the latch succeeds.
     */
    public boolean latchNoWait(CacheMode cacheMode)
        throws DatabaseException {

        if (latch.acquireExclusiveNoWait()) {
            setGeneration(cacheMode);
            return true;
        } else {
            return false;
        }
    }

    
    /**
     * Latch this node exclusive and set the generation.
     */
    public void latch()
        throws DatabaseException {

        latch(CacheMode.DEFAULT);
    }

    
    /**
     * Latch this node shared and set the generation.
     */
    @Override
    public void latchShared()
        throws DatabaseException {

        latchShared(CacheMode.DEFAULT);
    }

    
    /**
     * Latch this node if it is not latched by another thread, and set the
     * generation if the latch succeeds.
     */
    public boolean latchNoWait()
        throws DatabaseException {

        return latchNoWait(CacheMode.DEFAULT);
    }
 
    public String getLatchString() { return latch.toString(); }

    
    /**
     * Release the latch on this node.
     */
    @Override
    public void releaseLatch() {
        latch.release();
    }

    
    /**
     * Release the latch on this node, if this thread holds it.
     */
    public void releaseLatchIfOwner() {
        latch.releaseIfOwner();
    }

    

    /**
     * @return true if this thread holds the IN's latch
     */
    public boolean isLatchOwnerForRead() {
        return latch.isOwner();
    }
 
    public boolean isLatchOwnerForWrite() {
        return latch.isWriteLockedByCurrentThread();
    }
 
     /* For unit testing. */
     public int getLatchQueueLength() {
        return latch.getQueueLength();
     }
 
     public long getGeneration() {
        return generation;
     }
 
    public void setGeneration(CacheMode cacheMode) {
        switch (cacheMode) {
        case DEFAULT:
            generation = Generation.getNextGeneration();
            break;
        case UNCHANGED:
            break;
        case KEEP_HOT:
            generation = Long.MAX_VALUE;
            break;
        case MAKE_COLD:
        case EVICT_LN:
        case EVICT_BIN:
            if (isBIN()) {
                generation = 0L;
            }
            break;
        default:
            throw EnvironmentFailureException.unexpectedState
                ("unknown cacheMode: " + cacheMode);
        }
    }
 
    public void setGeneration(long newGeneration) {
        generation = newGeneration;
    }
 
     @Override
     public int getLevel() {
        return level;
     }
 
    private static int generateLevel(DatabaseId dbId, int newLevel) {
        if (dbId.equals(DbTree.ID_DB_ID)) {
            return newLevel | DBMAP_LEVEL;
        } else {
            return newLevel | MAIN_LEVEL;
        }
    }
 
    /* This has default protection for access by the unit tests. */
    void setKeyPrefix(byte[] keyPrefix) {
        assert databaseImpl != null;
        int prevLength = (this.keyPrefix == null) ? 0 : this.keyPrefix.length;
        this.keyPrefix = keyPrefix;
        /* Update the memory budgeting to reflect changes in the key prefix. */
        int currLength = (keyPrefix == null) ? 0 : keyPrefix.length;
        updateMemorySize(prevLength, currLength);
    }
 
    byte[] getKeyPrefix() {
        return keyPrefix;
    }
 
    public boolean getDirty() {
        return (flags & IN_DIRTY_BIT) != 0;
    }

    /* public for unit tests */
    public void setDirty(boolean dirty) {
        if (dirty) {
            flags |= IN_DIRTY_BIT;
        } else {
            flags &= ~IN_DIRTY_BIT;
        }
    }
 
    public boolean getRecalcToggle() {
        return (flags & IN_RECALC_TOGGLE_BIT) != 0;
    }

    public void setRecalcToggle(boolean toggle) {
        if (toggle) {
            flags |= IN_RECALC_TOGGLE_BIT;
        } else {
            flags &= ~IN_RECALC_TOGGLE_BIT;
        }
    }
 
    public boolean isRoot() {
        return (flags & IN_IS_ROOT_BIT) != 0;
    }
 
    public boolean isDbRoot() {
        return (flags & IN_IS_ROOT_BIT) != 0;
    }
 
    void setIsRoot(boolean isRoot) {
        setIsRootFlag(isRoot);
        setDirty(true);
    }

    private void setIsRootFlag(boolean isRoot) {
        if (isRoot) {
            flags |= IN_IS_ROOT_BIT;
        } else {
            flags &= ~IN_IS_ROOT_BIT;
        }
    }

    

    /**
     * @return the identifier key for this node.
     */
    public byte[] getIdentifierKey() {
        return identifierKey;
    }

    
    /**
     * Set the identifier key for this node.
     *
     * @param key the new identifier key for this node.
     */
     public void setIdentifierKey(byte[] key) {
 
         /*
          * The identifierKey is "intentionally" not kept track of in the
          * memory budget.  If we did, then it would look like this:
 
          int oldIDKeySz = (identifierKey == null) ?
                            0 :
                            MemoryBudget.byteArraySize(identifierKey.length);
 
          int newIDKeySz = (key == null) ?
                            0 :
                            MemoryBudget.byteArraySize(key.length);
          changeMemorySize(newIDKeySz - oldIDKeySz);
 
          */
        identifierKey = key;
         setDirty(true);
     }

    
    /**
     * Get the database for this IN.
     */
    public DatabaseImpl getDatabase() {
        return databaseImpl;
    }

    
    /**
     * Set the database reference for this node.
     */
    public void setDatabase(DatabaseImpl db) {
        databaseImpl = db;
    }

    
    /**
     * Needed only during duplicates conversion, not during normal operation.
     * The needDupKeyConversion field will only be true when first upgrading
     * from JE 4.1.  After the first time an IN is converted, it will be
     * written out in a later file format, so the needDupKeyConversion field
     * will be false and this method will do nothing.  See
     * DupConvert.convertInKeys.
     */
    private void convertDupKeys() {
        /* Do not convert more than once. */
        if (!needDupKeyConversion) {
            return;
        }
        needDupKeyConversion = false;
        DupConvert.convertInKeys(this);
    }
 
     /*
      * Get the database id for this node.
      */
     public DatabaseId getDatabaseId() {
        return databaseImpl.getId();
     }
 
    void copyEntries(final int from, final int to, final int n) {
        entryTargets = entryTargets.copy(from, to, n, this);
        entryKeyVals = entryKeyVals.copy(from, to, n, this);
        System.arraycopy(entryStates, from, entryStates, to, n);
        if (entryLsnLongArray == null) {
            final int fromOff = from << 2;
            final int toOff = to << 2;
            final int nBytes = n << 2;
            System.arraycopy(entryLsnByteArray, fromOff,
                             entryLsnByteArray, toOff, nBytes);
        } else {
            System.arraycopy(entryLsnLongArray, from,
                             entryLsnLongArray, to,
                             n);
        }
    }
 
    void clearEntry(int idx) {
        entryTargets = entryTargets.set(idx, null, this);
        entryKeyVals = entryKeyVals.set(idx, null, this);
        setLsnElement(idx, DbLsn.NULL_LSN);
        entryStates[idx] = 0;
    }

    
    /**
     * Return the idx'th key.  If prefixing is enabled, construct a new
     * byte[] containing the prefix and suffix.  If prefixing is not enabled,
     * just return the current byte[] in entryKeyVals.
     */
    public byte[] getKey(int idx) {
        if (keyPrefix != null) {
            int prefixLen = keyPrefix.length;
            byte[] suffix = entryKeyVals.get(idx);
            if (prefixLen == 0) {
                return suffix;
            }
            int suffixLen = (suffix == null ? 0 : suffix.length);
            byte[] ret = new byte[prefixLen + suffixLen];
            System.arraycopy(keyPrefix, 0, ret, 0, prefixLen);

            if (suffix != null) {
                System.arraycopy(suffix, 0, ret, prefixLen, suffixLen);
            }
            return ret;
        }
        return entryKeyVals.get(idx);
    }

    
    /**
     * Set the idx'th key.
     *
     * @return true if a multi-slot change was made and the complete IN
     * memory size must be updated.
     */
    private boolean setKeyAndDirty(int idx, byte[] keyVal) {
        entryStates[idx] |= DIRTY_BIT;
        return setKeyAndPrefix(idx, keyVal);
    }
 
     /*
      * Set the key at idx and adjust the key prefix if necessary.
      *
      * @return true if a multi-slot change was made and the complete IN memory
      * size must be updated.
      */
    private boolean setKeyAndPrefix(int idx, byte[] keyVal) {

        /*
         * Only compute key prefix if prefixing is enabled and there's an
         * existing prefix.
         */
        assert databaseImpl != null;
        if (databaseImpl.getKeyPrefixing() && keyPrefix != null) {
            if (!compareToKeyPrefix(keyVal)) {

                /*
                 * The new key doesn't share the current prefix, so recompute
                 * the prefix and readjust all the existing suffixes.
                 */
                byte[] newPrefix = computeKeyPrefix(idx);
                if (newPrefix != null) {
                    /* Take the new key into consideration for new prefix. */
                    newPrefix = Key.createKeyPrefix(newPrefix, keyVal);
                }
                recalcSuffixes(newPrefix, keyVal, idx);
                return true;
            }
            final INKeyRep.Type prevRepType = entryKeyVals.getType();
            entryKeyVals = entryKeyVals.set
                (idx, computeKeySuffix(keyPrefix, keyVal), this);
            return prevRepType != entryKeyVals.getType();
        }

        if (keyPrefix != null) {

            /*
             * Key prefixing has been turned off on this database, but there
             * are existing prefixes. Remove prefixes for this IN.
             */
            recalcSuffixes(new byte[0], keyVal, idx);
            return true;
        } else {
            final INKeyRep.Type oldRepType = entryKeyVals.getType();
            entryKeyVals = entryKeyVals.set(idx, keyVal, this);
            return oldRepType != entryKeyVals.getType();
        }
    }

    
    /**
     * Forces computation of the key prefix, without requiring a split.
     * Is public for use by DbCacheSize.
     */
    public void recalcKeyPrefix() {
        recalcSuffixes(computeKeyPrefix(-1), null, -1);
    }
 
     /*
      * Iterate over all keys in this IN and recalculate their suffixes based on
      * newPrefix.  If keyVal and idx are supplied, it means that entry[idx] is
      * about to be changed to keyVal so use that instead of
      * entryKeyVals.get(idx) when computing the new suffixes. If idx is < 0,
      * and keyVal is null, then recalculate suffixes for all entries in this.
      */
    private void recalcSuffixes(byte[] newPrefix, byte[] keyVal, int idx) {
        for (int i = 0; i < nEntries; i++) {
            byte[] curKey = (i == idx ? keyVal : getKey(i));
            entryKeyVals =
                entryKeyVals.set(i, computeKeySuffix(newPrefix, curKey), this);
        }
        setKeyPrefix(newPrefix);
    }
 
     /*
      * Returns whether the given newKey is a prefix of, or equal to, the
      * current keyPrefix.
      *
      * This has default protection for the unit tests.
      */
    boolean compareToKeyPrefix(byte[] newKey) {
        if (keyPrefix == null ||
            keyPrefix.length == 0) {
            return false;
        }

        int newKeyLen = newKey.length;
        for (int i = 0; i < keyPrefix.length; i++) {
            if (i < newKeyLen &&
                keyPrefix[i] == newKey[i]) {
                continue;
            } else {
                return false;
            }
        }

        return true;
    }
 
     /*
      * Computes a key prefix based on all the keys in 'this'.  Return null if
      * the IN is empty or prefixing is not enabled or there is no common
      * prefix for the keys.
      */
    private byte[] computeKeyPrefix(int excludeIdx) {
        if (!databaseImpl.getKeyPrefixing() ||
            nEntries <= 1) {
            return null;
        }
 
         int firstIdx = (excludeIdx == 0) ? 1 : 0;
         byte[] curPrefixKey = getKey(firstIdx);
         int prefixLen = curPrefixKey.length;
 
         /*
          * Only need to look at first and last entries when keys are ordered
          * byte-by-byte.  But when there is a comparator, keys are not
          * necessarily ordered byte-by-byte.  [#21328]
          */
         boolean byteOrdered;
         if (true) {
             /* Disable optimization for now.  Needs testing. */
             byteOrdered = false;
         } else {
            byteOrdered = (databaseImpl.getKeyComparator() == null);
         }
         if (byteOrdered) {
            int lastIdx = nEntries - 1;
             if (lastIdx == excludeIdx) {
                 lastIdx -= 1;
             }
             if (lastIdx > firstIdx) {
                 byte[] lastKey = getKey(lastIdx);
                 int newPrefixLen = Key.getKeyPrefixLength
                    (curPrefixKey, prefixLen, lastKey);
                 if (newPrefixLen < prefixLen) {
                     curPrefixKey = lastKey;
                     prefixLen = newPrefixLen;
                 }
             }
         } else {
            for (int i = firstIdx + 1; i < nEntries; i++) {
                 if (i == excludeIdx) {
                     continue;
                 }
                 byte[] curKey = getKey(i);
                 int newPrefixLen = Key.getKeyPrefixLength
                    (curPrefixKey, prefixLen, curKey);
                 if (newPrefixLen < prefixLen) {
                     curPrefixKey = curKey;
                     prefixLen = newPrefixLen;
                 }
             }
         }
 
         byte[] ret = new byte[prefixLen];
         System.arraycopy(curPrefixKey, 0, ret, 0, prefixLen);
 
         return ret;
     }
 
     /*
      * Given a prefix and a key, return the suffix portion of keyVal.
      */
    private byte[] computeKeySuffix(byte[] newPrefix, byte[] keyVal) {
         int prefixLen = (newPrefix == null ? 0 : newPrefix.length);
 
         if (prefixLen == 0) {
             return keyVal;
         }
 
         int suffixLen = keyVal.length - prefixLen;
         byte[] ret = new byte[suffixLen];
        System.arraycopy(keyVal, prefixLen, ret, 0, suffixLen);
         return ret;
     }
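The prefix/suffix scheme implemented by computeKeyPrefix, computeKeySuffix, and getKey can be sketched standalone. `KeyPrefixSketch` below is illustrative only; its helpers are hypothetical simplifications of the real `Key.getKeyPrefixLength` and the methods above.

```java
// Sketch of the key prefixing scheme: keys sharing a common prefix are
// stored as (prefix, suffix) pairs and reassembled on read.  Illustrative
// only; simplifications of computeKeySuffix and getKey above.
import java.util.Arrays;

class KeyPrefixSketch {

    // Length of the common prefix of two keys.
    static int commonPrefixLength(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        int i = 0;
        while (i < n && a[i] == b[i]) {
            i++;
        }
        return i;
    }

    // Strip the prefix from a full key (compare computeKeySuffix).
    static byte[] suffix(byte[] prefix, byte[] key) {
        return Arrays.copyOfRange(key, prefix.length, key.length);
    }

    // Reassemble the full key from prefix + suffix (compare getKey).
    static byte[] fullKey(byte[] prefix, byte[] suffix) {
        byte[] ret = new byte[prefix.length + suffix.length];
        System.arraycopy(prefix, 0, ret, 0, prefix.length);
        System.arraycopy(suffix, 0, ret, prefix.length, suffix.length);
        return ret;
    }

    public static void main(String[] args) {
        byte[] k1 = "apple-001".getBytes();
        byte[] k2 = "apple-002".getBytes();
        byte[] prefix = Arrays.copyOf(k1, commonPrefixLength(k1, k2));
        assert new String(prefix).equals("apple-00");
        // Round trip: splitting a key and reassembling it is lossless.
        assert Arrays.equals(fullKey(prefix, suffix(prefix, k2)), k2);
    }
}
```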
 
     /*
      * For debugging.
      */
    boolean verifyKeyPrefix() {
        byte[] computedKeyPrefix = computeKeyPrefix(-1);
        if (keyPrefix == null) {
            return computedKeyPrefix == null;
        }

        if (computedKeyPrefix == null ||
            computedKeyPrefix.length < keyPrefix.length) {
            System.err.println("VerifyKeyPrefix failed");
            System.err.println(dumpString(0, false));
            return false;
        }
        for (int i = 0; i < keyPrefix.length; i++) {
            if (keyPrefix[i] != computedKeyPrefix[i]) {
                System.err.println("VerifyKeyPrefix failed");
                System.err.println(dumpString(0, false));
                return false;
            }
        }
        return true;
    }

    
    /**
     * Get the idx'th migrate status.
     */
    public boolean getMigrate(int idx) {
        return (entryStates[idx] & MIGRATE_BIT) != 0;
    }

    
    /**
     * Set the idx'th migrate status.
     */
    public void setMigrate(int idx, boolean migrate) {
        if (migrate) {
            entryStates[idx] |= MIGRATE_BIT;
        } else {
            entryStates[idx] &= CLEAR_MIGRATE_BIT;
        }
    }

    public byte getState(int idx) {
        return entryStates[idx];
    }

    
    /**
     * Return the idx'th target.
     */
    public Node getTarget(int idx) {
        return entryTargets.get(idx);
    }

    
    /**
     * Sets the idx'th target.  No need to make dirty, that state only
     * applies to key and LSN.
     *
     * WARNING: This method does not update the memory budget.  The caller
     * must update the budget.
     */
    void setTarget(int idx, Node target) {
        assert isLatchOwnerForWrite() :
            "Not latched for write " + getClass().getName() +
            " id=" + getNodeId();
        entryTargets = entryTargets.set(idx, target, this);
    }

    
    /**
     * Return the idx'th LSN for this entry.
     *
     * @return the idx'th LSN for this entry.
     */
    public long getLsn(int idx) {
        if (entryLsnLongArray == null) {
            int offset = idx << 2;
            int fileOffset = getFileOffset(offset);
            if (fileOffset == -1) {
                return DbLsn.NULL_LSN;
            } else {
                return DbLsn.makeLsn((baseFileNumber +
                                      getFileNumberOffset(offset)),
                                     fileOffset);
            }
        } else {
            return entryLsnLongArray[idx];
        }
    }

    
    /**
     * Sets the idx'th target LSN.  Make this a private helper method, so
     * we're sure to set the IN dirty where appropriate.  Should only be
     * called by the setLsn and prepareForSlotReuse methods.
     */
    private void setLsnNoValidation(int idx, long lsn) {
        int oldSize = computeLsnOverhead();
        /* setLsnElement can mutate to an array of longs. */
        setLsnElement(idx, lsn);
        changeMemorySize(computeLsnOverhead() - oldSize);
        entryStates[idx] |= DIRTY_BIT;
    }
    /* For unit tests. */
    long[] getEntryLsnLongArray() {
        return entryLsnLongArray;
    }

    /* For unit tests. */
    byte[] getEntryLsnByteArray() {
        return entryLsnByteArray;
    }

    /* For unit tests. */
    void initEntryLsn(int capacity) {
        entryLsnLongArray = null;
        entryLsnByteArray = new byte[capacity << 2];
        baseFileNumber = -1;
    }
    /* Use default protection for unit tests. */
    void setLsnElement(int idx, long value) {
        int offset = idx << 2;
        /* Will implement this in the future. Note, don't adjust if mutating.*/
        //maybeAdjustCapacity(offset);
        if (entryLsnLongArray != null) {
            entryLsnLongArray[idx] = value;
            return;
        }
        if (value == DbLsn.NULL_LSN) {
            setFileNumberOffset(offset, (byte) 0);
            setFileOffset(offset, -1);
            return;
        }
        long thisFileNumber = DbLsn.getFileNumber(value);
        if (baseFileNumber == -1) {
            /* First entry. */
            baseFileNumber = thisFileNumber;
            setFileNumberOffset(offset, (byte) 0);
        } else {
            if (thisFileNumber < baseFileNumber) {
                if (!adjustFileNumbers(thisFileNumber)) {
                    mutateToLongArray(idx, value);
                    return;
                }
                baseFileNumber = thisFileNumber;
            }
            long fileNumberDifference = thisFileNumber - baseFileNumber;
            if (fileNumberDifference > Byte.MAX_VALUE) {
                mutateToLongArray(idx, value);
                return;
            }
            setFileNumberOffset
                (offset, (byte) (thisFileNumber - baseFileNumber));
            //assert getFileNumberOffset(offset) >= 0;
        }
        int fileOffset = (int) DbLsn.getFileOffset(value);
        if (fileOffset > MAX_FILE_OFFSET) {
            mutateToLongArray(idx, value);
            return;
        }
        setFileOffset(offset, fileOffset);
        //assert getLsn(offset) == value;
    }

    
    /**
     * Mutates the compact LSN representation to the expanded representation.
     * Used by DbCacheSize.
     */
    public void clearLsnCompaction() {
        if (entryLsnByteArray != null) {
            mutateToLongArray(0, getLsn(0));
        }
    }
    private void mutateToLongArray(int idx, long value) {
        int nElts = entryLsnByteArray.length >> 2;
        long[] newArr = new long[nElts];
        for (int i = 0; i < nElts; i++) {
            newArr[i] = getLsn(i);
        }
        newArr[idx] = value;
        entryLsnLongArray = newArr;
        entryLsnByteArray = null;
    }
    /* Will implement this in the future. Note, don't adjust if mutating.*/
    
    /*
    private void maybeAdjustCapacity(int offset) {
        if (entryLsnLongArray == null) {
            int bytesNeeded = offset + BYTES_PER_LSN_ENTRY;
            int currentBytes = entryLsnByteArray.length;
            if (currentBytes < bytesNeeded) {
                int newBytes = bytesNeeded +
                    (GROWTH_INCREMENT * BYTES_PER_LSN_ENTRY);
                byte[] newArr = new byte[newBytes];
                System.arraycopy(entryLsnByteArray, 0, newArr, 0,
                                 currentBytes);
                entryLsnByteArray = newArr;
                for (int i = currentBytes;
                     i < newBytes;
                     i += BYTES_PER_LSN_ENTRY) {
                    setFileNumberOffset(i, (byte) 0);
                    setFileOffset(i, -1);
                }
            }
        } else {
            int currentEntries = entryLsnLongArray.length;
            int idx = offset >> 2;
            if (currentEntries < idx + 1) {
                int newEntries = idx + GROWTH_INCREMENT;
                long[] newArr = new long[newEntries];
                System.arraycopy(entryLsnLongArray, 0, newArr, 0,
                                 currentEntries);
                entryLsnLongArray = newArr;
                for (int i = currentEntries; i < newEntries; i++) {
                    entryLsnLongArray[i] = DbLsn.NULL_LSN;
                }
            }
        }
    }
    */
    private boolean adjustFileNumbers(long newBaseFileNumber) {
        long oldBaseFileNumber = baseFileNumber;
        for (int i = 0;
             i < entryLsnByteArray.length;
             i += BYTES_PER_LSN_ENTRY) {
            if (getFileOffset(i) == -1) {
                continue;
            }
            long curEntryFileNumber =
                oldBaseFileNumber + getFileNumberOffset(i);
            long newCurEntryFileNumberOffset =
                (curEntryFileNumber - newBaseFileNumber);
            if (newCurEntryFileNumberOffset > Byte.MAX_VALUE) {
                long undoOffset = oldBaseFileNumber - newBaseFileNumber;
                for (int j = i - BYTES_PER_LSN_ENTRY;
                     j >= 0;
                     j -= BYTES_PER_LSN_ENTRY) {
                    if (getFileOffset(j) == -1) {
                        continue;
                    }
                    setFileNumberOffset
                        (j, (byte) (getFileNumberOffset(j) - undoOffset));
                    //assert getFileNumberOffset(j) >= 0;
                }
                return false;
            }
            setFileNumberOffset(i, (byte) newCurEntryFileNumberOffset);
            //assert getFileNumberOffset(i) >= 0;
        }
        return true;
    }
    private void setFileNumberOffset(int offset, byte fileNumberOffset) {
        entryLsnByteArray[offset] = fileNumberOffset;
    }
    private byte getFileNumberOffset(int offset) {
        return entryLsnByteArray[offset];
    }
    private void setFileOffset(int offset, int fileOffset) {
        put3ByteInt(offset + 1, fileOffset);
    }
    private int getFileOffset(int offset) {
        return get3ByteInt(offset + 1);
    }
    private void put3ByteInt(int offset, int value) {
        entryLsnByteArray[offset++] = (byte) (value >>> 0);
        entryLsnByteArray[offset++] = (byte) (value >>> 8);
        entryLsnByteArray[offset]   = (byte) (value >>> 16);
    }
    private int get3ByteInt(int offset) {
        int ret = (entryLsnByteArray[offset++] & 0xFF) << 0;
        ret += (entryLsnByteArray[offset++] & 0xFF) << 8;
        ret += (entryLsnByteArray[offset]   & 0xFF) << 16;
        if (ret == THREE_BYTE_NEGATIVE_ONE) {
            ret = -1;
        }
        return ret;
    }
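
    /*
     * Aside for readers: put3ByteInt/get3ByteInt store the file offset in 3
     * little-endian bytes, with the all-ones pattern decoding back to -1
     * (the "no LSN" sentinel). A standalone sketch, with illustrative names:
     */
```java
// Minimal sketch of 3-byte little-endian int packing with a -1 sentinel.
public class ThreeByteIntDemo {
    // Three 0xFF bytes decode to this value; it is reserved to mean -1.
    static final int THREE_BYTE_NEGATIVE_ONE = 0xFFFFFF;

    static void pack3(byte[] arr, int offset, int value) {
        arr[offset]     = (byte) (value >>> 0);
        arr[offset + 1] = (byte) (value >>> 8);
        arr[offset + 2] = (byte) (value >>> 16);
    }

    static int unpack3(byte[] arr, int offset) {
        int ret = (arr[offset] & 0xFF)
                + ((arr[offset + 1] & 0xFF) << 8)
                + ((arr[offset + 2] & 0xFF) << 16);
        return (ret == THREE_BYTE_NEGATIVE_ONE) ? -1 : ret;
    }

    public static void main(String[] args) {
        byte[] arr = new byte[4];
        pack3(arr, 1, 0x123456);
        System.out.println(Integer.toHexString(unpack3(arr, 1))); // 123456
        pack3(arr, 1, -1); // low 3 bytes of -1 are all ones
        System.out.println(unpack3(arr, 1)); // -1
    }
}
```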

    

    /**
     * @return true if the idx'th entry has been deleted, although the
     * transaction that performed the deletion may not be committed.
     */
    public boolean isEntryPendingDeleted(int idx) {
        return ((entryStates[idx] & PENDING_DELETED_BIT) != 0);
    }

    
    /**
     * Set pendingDeleted to true.
     */
    public void setPendingDeleted(int idx) {
        entryStates[idx] |= PENDING_DELETED_BIT;
        entryStates[idx] |= DIRTY_BIT;
    }

    
    /**
     * Set pendingDeleted to false.
     */
    public void clearPendingDeleted(int idx) {
        entryStates[idx] &= CLEAR_PENDING_DELETED_BIT;
        entryStates[idx] |= DIRTY_BIT;
    }

    

    /**
     * @return true if the idx'th entry is deleted for sure. If a transaction
     * performed the deletion, it has been committed.
     */
    public boolean isEntryKnownDeleted(int idx) {
        return ((entryStates[idx] & KNOWN_DELETED_BIT) != 0);
    }

    
    /**
     * Set knownDeleted to true.
     */
    void setKnownDeleted(int idx) {
        entryStates[idx] |= KNOWN_DELETED_BIT;
        entryStates[idx] |= DIRTY_BIT;
    }

    
    /**
     * Set knownDeleted to false.
     */
    void clearKnownDeleted(int idx) {
        entryStates[idx] &= CLEAR_KNOWN_DELETED_BIT;
        entryStates[idx] |= DIRTY_BIT;
    }

    

    /**
     * @return true if the object is dirty.
     */
    boolean isDirty(int idx) {
        return ((entryStates[idx] & DIRTY_BIT) != 0);
    }

    

    /**
     * @return the number of entries in this node.
     */
    public int getNEntries() {
        return nEntries;
    }
    /*
     * In the future we may want to move the following static methods to an
     * EntryState utility class and share all state bit twiddling among IN,
     * ChildReference, and DeltaInfo.
     */

    
    /**
     * Returns true if the given state is known deleted.
     */
    static boolean isStateKnownDeleted(byte state) {
        return ((state & KNOWN_DELETED_BIT) != 0);
    }

    
    /**
     * Returns true if the given state is pending deleted.
     */
    static boolean isStatePendingDeleted(byte state) {
        return ((state & PENDING_DELETED_BIT) != 0);
    }
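
    /*
     * Aside for readers: the KD/PD/dirty flags above are single bits in one
     * byte per slot. The sketch below shows the set/test/clear pattern; the
     * bit values are assumptions chosen for illustration, not quotes of JE's
     * declared constants.
     */
```java
// Standalone sketch of per-slot state bit twiddling (KD/PD/dirty style).
public class EntryStateSketch {
    static final byte KNOWN_DELETED_BIT = 0x1;          // assumed value
    static final byte DIRTY_BIT = 0x2;                  // assumed value
    static final byte PENDING_DELETED_BIT = 0x8;        // assumed value
    static final byte CLEAR_PENDING_DELETED_BIT = ~0x8; // mask to clear PD

    public static void main(String[] args) {
        byte state = 0;
        // Mark the slot pending-deleted; every state change also dirties it.
        state |= PENDING_DELETED_BIT;
        state |= DIRTY_BIT;
        System.out.println((state & PENDING_DELETED_BIT) != 0); // true
        // Clearing the flag (e.g. when the deleting txn aborts) dirties too.
        state &= CLEAR_PENDING_DELETED_BIT;
        System.out.println((state & PENDING_DELETED_BIT) != 0); // false
        System.out.println((state & DIRTY_BIT) != 0);           // true
    }
}
```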

    

    /**
     * @return the maximum number of entries in this node.
     */
    public int getMaxEntries() {
        return entryStates.length;
    }

    
    /**
     * Variant of fetchTarget that is called while holding an exclusive latch
     * and therefore does not throw RelatchRequiredException.
     */
    public final Node fetchTargetWithExclusiveLatch(int idx)
        throws DatabaseException {
        try {
            return fetchTarget(idx);
        } catch (RelatchRequiredException e) {
            throw EnvironmentFailureException.unexpectedException(e);
        }
    }

    
    /**
     * Returns the target of the idx'th entry or null if a pendingDeleted or
     * knownDeleted entry has been cleaned. Note that null can only be
     * returned for a slot that could contain a deleted LN, not other node
     * types and not a DupCountLN since DupCountLNs are never deleted. Null
     * is also returned for a KnownDeleted slot with a NULL_LSN.
     *
     * @return the target node or null.
     */
    public Node fetchTarget(int idx)
        throws RelatchRequiredException, DatabaseException {
        EnvironmentImpl envImpl = databaseImpl.getDbEnvironment();
        boolean isMiss = false;
        if (entryTargets.get(idx) == null) {
            /* Fault object in from log. */
            long lsn = getLsn(idx);
            if (lsn == DbLsn.NULL_LSN) {
                if (!isEntryKnownDeleted(idx)) {
                    throw EnvironmentFailureException.unexpectedState
                        (makeFetchErrorMsg("NULL_LSN without KnownDeleted",
                                           this, lsn, entryStates[idx]));
                }
                /*
                 * Ignore a NULL_LSN (return null) if KnownDeleted is set.
                 * This is the remnant of an incomplete insertion -- see
                 * Tree.insert. [#13126] or a rollback.
                 */
            } else {
                if (!isLatchOwnerForWrite()) {
                    throw RelatchRequiredException.relatchRequiredException;
                }
                try {
                    LogEntry logEntry =
                        envImpl.getLogManager().
                        getLogEntryAllowInvisibleAtRecovery(lsn);
                    /* Ensure keys are transactionally correct. [#15704] */
                    byte[] lnSlotKey = null;
                    if (logEntry instanceof LNLogEntry) {
                        LNLogEntry lnEntry = (LNLogEntry