   /*-
    * See the file LICENSE for redistribution information.
    *
    * Copyright (c) 2002, 2013 Oracle and/or its affiliates.  All rights reserved.
    *
    */
   
   package com.sleepycat.je.cleaner;
   
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_BIN_DELTAS_CLEANED;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_BIN_DELTAS_DEAD;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_BIN_DELTAS_MIGRATED;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_BIN_DELTAS_OBSOLETE;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_CLUSTER_LNS_PROCESSED;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_DELETIONS;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_ENTRIES_READ;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_INS_CLEANED;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_INS_DEAD;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_INS_MIGRATED;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_INS_OBSOLETE;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_LNQUEUE_HITS;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_LNS_CLEANED;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_LNS_DEAD;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_LNS_LOCKED;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_LNS_MARKED;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_LNS_MIGRATED;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_LNS_OBSOLETE;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_LN_SIZE_CORRECTION_FACTOR;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_MARKED_LNS_PROCESSED;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_PENDING_LNS_LOCKED;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_PENDING_LNS_PROCESSED;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_PROBE_RUNS;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_REPEAT_ITERATOR_READS;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_RUNS;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_TOTAL_LOG_SIZE;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.CLEANER_TO_BE_CLEANED_LNS_PROCESSED;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.GROUP_DESC;
  import static com.sleepycat.je.cleaner.CleanerStatDefinition.GROUP_NAME;
  
  import java.io.IOException;
  import java.util.Arrays;
  import java.util.Collections;
  import java.util.Iterator;
  import java.util.LinkedList;
  import java.util.List;
  import java.util.Map;
  import java.util.Set;
  import java.util.SortedSet;
  import java.util.concurrent.atomic.AtomicBoolean;
  import java.util.concurrent.atomic.AtomicLong;
  import java.util.logging.Logger;
  
The Cleaner is responsible for effectively garbage collecting the JE log. It selects the least-utilized log file for cleaning (see FileSelector), reads through the log file (FileProcessor) and determines whether each entry is obsolete (no longer relevant) or active (referenced by the Btree). Entries that are active are migrated (copied) to the end of the log, and finally the cleaned file is deleted.

The migration of active entries is a multi-step process that can be configured to operate in different ways. Eviction and checkpointing, as well as the cleaner threads (FileProcessor instances), are participants in this process. Migration may be immediate, lazy, or proactive.

Active INs are always migrated lazily, which means that they are marked dirty by the FileProcessor and then logged later by an eviction or checkpoint. Active LNs are migrated either lazily or immediately, depending on the EnvironmentConfig.CLEANER_LAZY_MIGRATION setting. If they are migrated lazily, the migrate flag is set in the LN slot by the FileProcessor and the LN is logged later by an eviction or checkpoint.

When the FileProcessor is finished with a file, all lazy migration for that file is normally completed by the end of the next checkpoint, if not sooner via eviction. The checkpoint/recovery mechanism ensures that obsolete entries will not be referenced by the Btree. At the end of the checkpoint, it is therefore safe to delete the log file.

There is one exception to the above. When attempting to migrate an LN, if the LN cannot be locked then the migration must be retried later. Also, if a database removal is in progress, all entries in that database are considered obsolete, but the log file cannot be deleted until the database removal is complete. Such "pending" LNs and databases are queued and processed periodically during file processing and at the start of a checkpoint; see processPending(). In this case, we may have to wait for more than one checkpoint to occur before the log file can be deleted. See FileSelector and the use of the pendingLNs and pendingDBs collections.

The last type of migration, proactive migration, is migration of LNs by the evictor or checkpointer for files that are to-be-cleaned, i.e., part of the cleaner's backlog. The idea is to prevent the backlog from growing too large (and potentially filling the disk) by doing more cleaner work during eviction, which throttles the application threads. By default, proactive migration is disabled, but it can be enabled using EnvironmentConfig.CLEANER_BACKGROUND_PROACTIVE_MIGRATION and CLEANER_FOREGROUND_PROACTIVE_MIGRATION.
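The utilization-driven file selection described above can be sketched in miniature. The class and method names below are hypothetical illustrations of the policy ("clean the least-utilized file when it falls below a threshold"), not JE's actual FileSelector API:

```java
import java.util.HashMap;
import java.util.Map;

/* Toy model of utilization-based log file selection; names are illustrative. */
class FileSelectionSketch {

    /**
     * Returns the file number with the lowest utilization if it is below
     * minUtilization (a percentage), or -1 if no file qualifies for cleaning.
     */
    static long selectFileToClean(Map<Long, Integer> utilizationByFile,
                                  int minUtilization) {
        long bestFile = -1;
        int bestUtil = Integer.MAX_VALUE;
        for (Map.Entry<Long, Integer> e : utilizationByFile.entrySet()) {
            if (e.getValue() < bestUtil) {
                bestUtil = e.getValue();
                bestFile = e.getKey();
            }
        }
        /* Clean only when below the threshold; otherwise leave the log alone. */
        return (bestUtil < minUtilization) ? bestFile : -1;
    }
}
```

The real selector also weighs file age and obsolete-size estimates, but the core idea is the same: pick the file whose live data is cheapest to migrate.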
 
 public class Cleaner implements DaemonRunner,
                                 EnvConfigObserver,
                                 ExceptionListenerUser {
     /* From cleaner */
     static final String CLEAN_IN = "CleanIN:";
     static final String CLEAN_LN = "CleanLN:";
     static final String CLEAN_MIGRATE_LN = "CleanMigrateLN:";
     static final String CLEAN_PENDING_LN = "CleanPendingLN:";

    
Whether to update the IN generation count during searches. This is currently disabled because 1) we update the generation of the BIN when we set a MIGRATE flag and 2) if the BIN is not evicted its parents will not be, so not updating the generation during the search has no benefit. By not updating the generation during searches for which we do NOT set the MIGRATE flag, we avoid holding INs in the cache that are not needed for lazy migration. However, we do very few searches for obsolete LNs because the obsolete tracking info prevents this, so the benefit of not updating the generation during searches is questionable. In other words, changing this setting will have little effect.
 
      static final CacheMode UPDATE_GENERATION = CacheMode.UNCHANGED;

    
Whether the cleaner should participate in critical eviction. Ideally the cleaner would not participate in eviction, since that would reduce the cost of cleaning. However, the cleaner can add large numbers of nodes to the cache. By not participating in eviction, other threads could be kept in a constant state of eviction and would effectively starve. Therefore, this setting is currently enabled.
 
     static final boolean DO_CRITICAL_EVICTION = true;
 
     /*
      * Constants used by checkBacklogGrowth.  These settings are not
      * configurable externally because our internal backlog data will probably
      * be removed or changed in the future, and we'll have to use a different
      * approach for determining whether the cleaner is making progress.
      */
     /* Number of backlogs counted in the trailing average. */
     static final int BACKLOG_ALERT_COUNT = 5;
     /* Smallest backlog value that will generate an alert. */
     static final int BACKLOG_ALERT_FLOOR = 5;
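The two constants above drive a trailing-average alert: a warning is warranted only when the backlog is above the floor, is growing, and its trailing average over the last BACKLOG_ALERT_COUNT samples is rising. As an illustration, here is a self-contained sketch of that rule (class and method names are hypothetical; the real logic lives in checkBacklogGrowth in this class):

```java
import java.util.LinkedList;

/* Illustrative trailing-average backlog alert, mirroring the constants above. */
class BacklogAverageSketch {

    static final int ALERT_COUNT = 5;  /* samples in the trailing average */
    static final int ALERT_FLOOR = 5;  /* smallest backlog worth alerting on */

    private final LinkedList<Integer> recent = new LinkedList<>();

    /** Records newBacklog and returns true when an alert should be logged. */
    boolean add(int oldBacklog, int newBacklog) {
        /* Averages are valid only once ALERT_COUNT samples have been seen. */
        boolean valid = recent.size() >= ALERT_COUNT;
        double oldAvg = average();
        recent.addLast(newBacklog);
        while (recent.size() > ALERT_COUNT) {
            recent.removeFirst();
        }
        double newAvg = average();
        /* Alert only above the floor, when growing, with a rising average. */
        return newBacklog >= ALERT_FLOOR
            && newBacklog > oldBacklog
            && valid
            && newAvg > oldAvg;
    }

    private double average() {
        if (recent.isEmpty()) {
            return 0;
        }
        int sum = 0;
        for (int v : recent) {
            sum += v;
        }
        return (double) sum / recent.size();
    }
}
```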
 
     /*
      * List of recent backlog values. Maximum size is BACKLOG_ALERT_COUNT.  See
      * checkBacklogGrowth.
      */
      private final LinkedList<Integer> backlogAlertList =
          new LinkedList<Integer>();
 
     /*
      * Cumulative counters.  Updates to these counters occur in multiple
      * threads, including FileProcessor threads,  and are not synchronized.
      * This could produce errors in counting, but avoids contention around stat
      * updates.
      */
     StatGroup stats;
     LongStat nINsDead;
     LongStat nLNsDead;
 
     /*
      * Configuration parameters are non-private for use by FileProcessor,
      * UtilizationTracker, or UtilizationCalculator.
      */
     long lockTimeout;
     int readBufferSize;
     int lookAheadCacheSize;
     long nDeadlockRetries;
     boolean expunge;
     boolean clusterResident;
     boolean clusterAll;
     int maxBatchFiles;
     long cleanerBytesInterval;
     boolean trackDetail;
     boolean fetchObsoleteSize;
     boolean lazyMigration;
     int dbCacheClearCount;
     private boolean foregroundProactiveMigration;
     private boolean backgroundProactiveMigration;
     private final boolean rmwFixEnabled;
     int minUtilization;
     int minFileUtilization;
     int minAge;

    
All files that are to-be-cleaned. Used to perform proactive migration. Is read-only after assignment, so no synchronization is needed.
 
      private Set<Long> toBeCleanedFiles = Collections.emptySet();

    
All files that are below the minUtilization threshold. Used to perform clustering migration. Is read-only after assignment, so no synchronization is needed.
 
      private Set<Long> lowUtilizationFiles = Collections.emptySet();
 
     private final String name;
     private final EnvironmentImpl env;
     private final UtilizationProfile profile;
     private final UtilizationTracker tracker;
     private final UtilizationCalculator calculator;
     private final FileSelector fileSelector;
     private FileProcessor[] threads;
 
      /*
       * Log file deletion must check for ongoing backups and other procedures
       * that rely on a set of log files remaining stable (no deletions).
       * Multiple ranges of file numbers may be protected from deletion, where
       * each range is from a given file number to the end of the log.
       *
       * protectedFileRanges is a list that contains the starting file number
       * for each protected range.  All files from the minimum of these values
       * to the end of the log are protected from deletion.  This field is
       * accessed only while synchronizing on protectedFileRanges.
       */
      private final List<Long> protectedFileRanges;
     private final Logger logger;
     final AtomicLong totalRuns;
 
     /* See processPending. */
     private final AtomicBoolean processPendingReentrancyGuard =
         new AtomicBoolean(false);
 
      public Cleaner(EnvironmentImpl env, String name)
          throws DatabaseException {

          this.env = env;
          this.name = name;
 
          /* Initiate the stats definitions. */
          stats = new StatGroup(GROUP_NAME, GROUP_DESC);
          nCleanerRuns = new LongStat(stats, CLEANER_RUNS);
          nCleanerProbeRuns = new LongStat(stats, CLEANER_PROBE_RUNS);
          nCleanerDeletions = new LongStat(stats, CLEANER_DELETIONS);
          nINsObsolete = new LongStat(stats, CLEANER_INS_OBSOLETE);
          nINsCleaned = new LongStat(stats, CLEANER_INS_CLEANED);
          nINsDead = new LongStat(stats, CLEANER_INS_DEAD);
          nINsMigrated = new LongStat(stats, CLEANER_INS_MIGRATED);
          nBINDeltasObsolete = new LongStat(stats, CLEANER_BIN_DELTAS_OBSOLETE);
          nBINDeltasCleaned = new LongStat(stats, CLEANER_BIN_DELTAS_CLEANED);
          nBINDeltasDead = new LongStat(stats, CLEANER_BIN_DELTAS_DEAD);
          nBINDeltasMigrated = new LongStat(stats, CLEANER_BIN_DELTAS_MIGRATED);
          nLNsObsolete = new LongStat(stats, CLEANER_LNS_OBSOLETE);
          nLNsCleaned = new LongStat(stats, CLEANER_LNS_CLEANED);
          nLNsDead = new LongStat(stats, CLEANER_LNS_DEAD);
          nLNsLocked = new LongStat(stats, CLEANER_LNS_LOCKED);
          nLNsMigrated = new LongStat(stats, CLEANER_LNS_MIGRATED);
          nLNsMarked = new LongStat(stats, CLEANER_LNS_MARKED);
          nLNQueueHits = new LongStat(stats, CLEANER_LNQUEUE_HITS);
          nPendingLNsProcessed =
              new LongStat(stats, CLEANER_PENDING_LNS_PROCESSED);
          nMarkedLNsProcessed =
              new LongStat(stats, CLEANER_MARKED_LNS_PROCESSED);
          nToBeCleanedLNsProcessed =
              new LongStat(stats, CLEANER_TO_BE_CLEANED_LNS_PROCESSED);
          nClusterLNsProcessed =
              new LongStat(stats, CLEANER_CLUSTER_LNS_PROCESSED);
          nPendingLNsLocked = new LongStat(stats, CLEANER_PENDING_LNS_LOCKED);
          nEntriesRead = new LongStat(stats, CLEANER_ENTRIES_READ);
          nRepeatIteratorReads =
              new LongStat(stats, CLEANER_REPEAT_ITERATOR_READS);
          totalLogSize = new LongStat(stats, CLEANER_TOTAL_LOG_SIZE);
          lnSizeCorrectionFactor =
              new FloatStat(stats, CLEANER_LN_SIZE_CORRECTION_FACTOR);

          tracker = new UtilizationTracker(env, this);
          profile = new UtilizationProfile(env, tracker);
          calculator = new UtilizationCalculator(env, this);
          fileSelector = new FileSelector();
          threads = new FileProcessor[0];
          protectedFileRanges = new LinkedList<Long>();
          logger = LoggerUtils.getLogger(getClass());
          totalRuns = new AtomicLong(0);
 
         /*
          * The trackDetail property is immutable because of the complexity (if
          * it were mutable) in determining whether to update the memory budget
          * and perform eviction.
          */
          trackDetail = env.getConfigManager().getBoolean
              (EnvironmentParams.CLEANER_TRACK_DETAIL);

          rmwFixEnabled = env.getConfigManager().getBoolean
              (EnvironmentParams.CLEANER_RMW_FIX);
 
         /* Initialize mutable properties and register for notifications. */
         envConfigUpdate(env.getConfigManager(), null);
         env.addConfigObserver(this);
         env.registerExceptionListenerUser(this);
     }

    
Process notifications of mutable property changes.

Throws:
java.lang.IllegalArgumentException via Environment ctor and setMutableConfig.
 
      public void envConfigUpdate(DbConfigManager cm,
                                  EnvironmentMutableConfig ignore)
          throws DatabaseException {

          lockTimeout = cm.getDuration(EnvironmentParams.CLEANER_LOCK_TIMEOUT);

          readBufferSize = cm.getInt(EnvironmentParams.CLEANER_READ_SIZE);
          if (readBufferSize <= 0) {
              readBufferSize = cm.getInt
                  (EnvironmentParams.LOG_ITERATOR_READ_SIZE);
          }

          lookAheadCacheSize = cm.getInt
              (EnvironmentParams.CLEANER_LOOK_AHEAD_CACHE_SIZE);
          nDeadlockRetries = cm.getInt(EnvironmentParams.CLEANER_DEADLOCK_RETRY);
          expunge = cm.getBoolean(EnvironmentParams.CLEANER_REMOVE);
          clusterResident = cm.getBoolean(EnvironmentParams.CLEANER_CLUSTER);
          clusterAll = cm.getBoolean(EnvironmentParams.CLEANER_CLUSTER_ALL);
          maxBatchFiles = cm.getInt(EnvironmentParams.CLEANER_MAX_BATCH_FILES);
          minAge = cm.getInt(EnvironmentParams.CLEANER_MIN_AGE);
          minUtilization = cm.getInt(EnvironmentParams.CLEANER_MIN_UTILIZATION);
          minFileUtilization = cm.getInt
              (EnvironmentParams.CLEANER_MIN_FILE_UTILIZATION);
          foregroundProactiveMigration = cm.getBoolean
              (EnvironmentParams.CLEANER_FOREGROUND_PROACTIVE_MIGRATION);
          backgroundProactiveMigration = cm.getBoolean
              (EnvironmentParams.CLEANER_BACKGROUND_PROACTIVE_MIGRATION);

          if (clusterResident && clusterAll) {
              throw new IllegalArgumentException
                  ("Both " + EnvironmentParams.CLEANER_CLUSTER.getName() +
                   " and " + EnvironmentParams.CLEANER_CLUSTER_ALL.getName() +
                   " may not be set to true.");
          }

          int nThreads = cm.getInt(EnvironmentParams.CLEANER_THREADS);
          assert nThreads > 0;

          if (nThreads != threads.length) {

              /* Shutdown threads when reducing their number. */
              for (int i = nThreads; i < threads.length; i += 1) {
                  if (threads[i] != null) {
                      threads[i].shutdown();
                      threads[i] = null;
                  }
              }

              /* Copy existing threads that are still used. */
              FileProcessor[] newThreads = new FileProcessor[nThreads];
              for (int i = 0; i < nThreads && i < threads.length; i += 1) {
                  newThreads[i] = threads[i];
              }

              /* Don't lose track of new threads if an exception occurs. */
              threads = newThreads;

              /* Start new threads when increasing their number. */
              for (int i = 0; i < nThreads; i += 1) {
                  if (threads[i] == null) {
                      threads[i] = new FileProcessor
                          (name + '-' + (i + 1),
                           env, this, profile, calculator, fileSelector);
                  }
              }
          }

          cleanerBytesInterval = cm.getLong
              (EnvironmentParams.CLEANER_BYTES_INTERVAL);
          if (cleanerBytesInterval == 0) {
              cleanerBytesInterval = cm.getLong
                  (EnvironmentParams.LOG_FILE_MAX) / 4;
          }

          fetchObsoleteSize = cm.getBoolean
              (EnvironmentParams.CLEANER_FETCH_OBSOLETE_SIZE);

          /*
           * In addition to honoring CLEANER_LAZY_MIGRATION, lazy migration of
           * LNs is disabled if CHECKPOINTER_HIGH_PRIORITY is true.  LN
           * migration slows down the checkpoint and so LNs are migrated by
           * FileProcessor when high priority checkpoints are configured.
           */
          lazyMigration =
              cm.getBoolean(EnvironmentParams.CLEANER_LAZY_MIGRATION) &&
              !cm.getBoolean(EnvironmentParams.CHECKPOINTER_HIGH_PRIORITY);

          dbCacheClearCount = cm.getInt
              (EnvironmentParams.ENV_DB_CACHE_CLEAR_COUNT);
      }
 
      public UtilizationTracker getUtilizationTracker() {
          return tracker;
      }

      public UtilizationProfile getUtilizationProfile() {
          return profile;
      }

      public UtilizationCalculator getUtilizationCalculator() {
          return calculator;
      }
 
     public FileSelector getFileSelector() {
          return fileSelector;
     }
 
     public boolean getFetchObsoleteSize() {
          return fetchObsoleteSize;
     }

    
 
      public boolean isRMWFixEnabled() {
          return rmwFixEnabled;
      }
 
     /* For unit testing only. */
     public void setFileChosenHook(TestHook hook) {
          fileChosenHook = hook;
     }
 
     public CleanerLogSummary getLogSummary() {
          return calculator.getLogSummary();
     }
 
     public void setLogSummary(CleanerLogSummary logSummary) {
          calculator.setLogSummary(logSummary);
     }
 
     /*
      * Delegate the run/pause/wakeup/shutdown DaemonRunner operations.  We
      * always check for null to account for the possibility of exceptions
      * during thread creation.  Cleaner daemon can't ever be run if No Locking
      * mode is enabled.
      */
     public void runOrPause(boolean run) {
          if (!env.isNoLocking()) {
              for (FileProcessor processor : threads) {
                 if (processor != null) {
                     processor.runOrPause(run);
                 }
             }
         }
     }
 
     public void wakeup() {
          for (FileProcessor thread : threads) {
             if (thread != null) {
                 thread.wakeup();
             }
         }
     }
 
     public void requestShutdown() {
          for (FileProcessor thread : threads) {
             if (thread != null) {
                 thread.requestShutdown();
             }
         }
     }
 
     public void shutdown() {
          for (int i = 0; i < threads.length; i += 1) {
              if (threads[i] != null) {
                  threads[i].shutdown();
                  threads[i].clearEnv();
                  threads[i] = null;
             }
         }
     }
 
     public int getNWakeupRequests() {
         int count = 0;
          for (FileProcessor thread : threads) {
             if (thread != null) {
                 count += thread.getNWakeupRequests();
             }
         }
         return count;
     }
 
     private boolean areThreadsRunning() {
          for (FileProcessor thread : threads) {
             if (thread != null) {
                 return thread.isRunning();
             }
         }
         return false;
     }

    
 
     public void setExceptionListener(ExceptionListener exceptionListener) {
          for (FileProcessor thread : threads) {
             if (thread != null) {
                 thread.setExceptionListener(exceptionListener);
             }
         }
     }

    
Cleans selected files and returns the number of files cleaned. This method is not invoked by a daemon thread; it is invoked programmatically.

Parameters:
cleanMultipleFiles is true to clean until we're under budget, or false to clean at most one file.
forceCleaning is true to clean even if we're not under the utilization threshold.
Returns:
the number of files cleaned, not including files cleaned unsuccessfully.
 
      public int doClean(boolean cleanMultipleFiles, boolean forceCleaning)
          throws DatabaseException {

          FileProcessor processor = new FileProcessor
              ("", env, this, profile, calculator, fileSelector);
          return processor.doClean
              (false /*invokedFromDaemon*/, cleanMultipleFiles, forceCleaning);
     }

    
Load stats.
 
     public StatGroup loadStats(StatsConfig config) {
 
         if (!config.getFast()) {
              totalLogSize.set(profile.getTotalLogSize());
         }
 
 
          StatGroup copyStats = stats.cloneGroup(config.getClear());
         /* Add the FileSelector's stats to the cleaner stat group. */
          copyStats.addAll(fileSelector.loadStats());
 
         return copyStats;
     }

    
Deletes all files that are safe-to-delete and that are not protected by a DbBackup or replication. Files are deleted only if there are no read-only processes.

Log file deletion is coordinated by the use of three mechanisms:

1) To guard against read-only processes, the would-be deleter tries to get an exclusive lock on the environment. This will not be possible if a read-only process exists. File locks must be used for inter-process coordination. But note that file locks are not supported intra-process.

2) Synchronization on the protectedFileRanges field. Elements are added to and removed from the protectedFileRanges collection by DbBackup. More than one backup may be occurring at once, hence a collection of protectedFileRanges is maintained, and the files protected are the range starting with the minimum value returned by the objects in that collection.

3) In a replicated environment, files are protected from deletion by the CBVLSN (CleanerBarrier VLSN). No file greater or equal to the CBVLSN file may be deleted.

For cases (2) and (3), all coordinated activities -- replication, backup and file deletion -- can only be carried out by a read-write process, so we know that all activities are occurring in the same process because there can only be one JE read-write process per environment.

This method is synchronized to prevent multiple threads from requesting the environment lock or deleting the same files.
 
     synchronized void deleteSafeToDeleteFiles()
         throws DatabaseException {
 
         /* Fail loudly if the environment is invalid. */
          env.checkIfInvalid();

          /* Fail silently if the environment is not open. */
          if (env.mayNotWrite()) {
              return;
          }

          SortedSet<Long> safeToDeleteFiles =
              fileSelector.copySafeToDeleteFiles();
          if (safeToDeleteFiles == null) {
              return; /* Nothing to do. */
         }
 
         /*
          * Ask HA to filter the "safe to delete" file set to determine which
          * are needed for HA purposes, and are protected.  We can safely assume
          * that if a file is declared to be unprotected by HA, and eligible to
          * delete, it will never be deemed on a later call to be protected.
          * This lets us avoid any synchronization between cleaning and HA.
          */
          SortedSet<Long> unprotectedFiles =
              env.getUnprotectedFileSet(safeToDeleteFiles);
         if (unprotectedFiles == null) {
 
             /*
              * The replicated node is not available, so the cleaner barrier can
              * not be read. Don't delete any files.
              */
             return;
         }
 
         if (unprotectedFiles.isEmpty()) {
             /* Leave a clue for analyzing log file deletion problems. */
              LoggerUtils.traceAndLog(logger, env,
                                      "Cleaner has " + safeToDeleteFiles.size() +
                                      " files not deleted because they are " +
                                      "protected by replication.");
              return; /* Nothing to do. */
         }
 
         /*
          * Truncate the entries in the VLSNIndex that reference VLSNs in the
          * files to be deleted.  [#16566]
          *
          * This is done prior to deleting the files to ensure that the
          * replicator removes the files from the VLSNIndex.  If we were to
          * truncate after deleting a file, we may crash before the truncation
          * and would have to "replay" the truncation later in
          * UtilizationProfile.populateCache.  This would be more complex and
          * the lastVLSN for the files would not be available.
          *
          * OTOH, if we crash after the truncation and before deleting a file,
          * it is very likely that we will re-clean the zero utilization file
          * and delete it later.  This will only cause a redundant truncation.
          *
          * This is done before locking the environment to minimize the interval
          * during which the environment is locked and read-only processes are
          * blocked.  We may unnecessarily truncate the VLSNIndex if we can't
          * lock the environment, but that is a lesser priority.
          *
          * We intentionally do not honor the protected file ranges specified by
          * DbBackups when truncating, because the VLSNIndex is protected only
          * by the CBVLSN.  Luckily, this also means we do not need to
          * synchronize on protectedFileRanges while truncating, and DbBackups
          * will not be blocked by this potentially expensive operation.
          */
         Long[] unprotectedFilesArray = unprotectedFiles.toArray(new Long[0]);
         for (int i = unprotectedFilesArray.length - 1; i >= 0; i -= 1) {
             Long fileNum = unprotectedFilesArray[i];
 
             /*
              * Truncate VLSNIndex for the highest numbered file with a VLSN. We
              * search from high to low because some files may not contain a
              * VLSN. If the truncate does have to do work, the VLSNIndex will
              * ensure that the change is fsynced to disk. [#20702]
              */
              VLSN lastVlsn = profile.getLastVLSN(fileNum);
              if ((lastVlsn != null) && !lastVlsn.isNull()) {
                  env.vlsnHeadTruncate(lastVlsn, fileNum);
                 break;
             }
         }
 
         /*
          * If we can't get an exclusive lock, then there are other processes
          * with the environment open read-only and we can't delete any files.
          */
          final FileManager fileManager = env.getFileManager();
          if (!fileManager.lockEnvironment(false, true)) {
              LoggerUtils.traceAndLog(logger, env,
                                     "Cleaner has " + safeToDeleteFiles.size() +
                                     " files not deleted because of read-only" +
                                     " processes.");
             return;
         }
 
         /* Be sure to release the environment lock in the finally block. */
         try {
             /* Synchronize while deleting files to block DbBackup.start. */
              synchronized (protectedFileRanges) {

                  /* Intersect the protected ranges for active DbBackups. */
                  if (!protectedFileRanges.isEmpty()) {
                      unprotectedFiles = unprotectedFiles.headSet
                          (Collections.min(protectedFileRanges));
                  }

                  /* Delete the unprotected files. */
                  for (final Iterator<Long> iter = unprotectedFiles.iterator();
                       iter.hasNext();) {
                      final Long fileNum = iter.next();
                      final boolean deleted;
                      try {
                          if (expunge) {
                              deleted = fileManager.deleteFile(fileNum);
                          } else {
                              deleted = fileManager.renameFile
                                  (fileNum, FileManager.DEL_SUFFIX);
                         }
                     } catch (IOException e) {
                          throw new EnvironmentFailureException
                              (env, EnvironmentFailureReason.LOG_WRITE,
                               "Unable to delete or rename " + fileNum, e);
                     }
                     if (deleted) {
 
                         /*
                          * Deletion was successful.  Log a trace message for
                          * debugging of log cleaning behavior.
                          */
                          LoggerUtils.traceAndLog(logger, env,
                                                 "Cleaner deleted file 0x" +
                                                 Long.toHexString(fileNum));
                     } else if (!fileManager.isFileValid(fileNum)) {
 
                         /*
                          * Somehow the file was previously deleted.  This could
                          * indicate an internal state error, and therefore we
                          * output a trace message.  But we should not
                          * repeatedly attempt to delete it, so we do remove it
                          * from the profile below.
                          */
                          LoggerUtils.traceAndLog
                              (logger, env,
                               "Cleaner deleteSafeToDeleteFiles Log file 0x" +
                               Long.toHexString(fileNum) + " was previously " +
                               (expunge ? "deleted" : "renamed") + ".  State: " +
                               fileSelector);
                     } else {
 
                         /*
                          * We will retry the deletion later if file still
                          * exists.  The deletion could have failed on Windows
                          * if the file was recently closed.  Remove the file
                          * from unprotectedFiles. That way, we won't remove it
                          * from the FileSelector's safe-to-delete set or the UP
                          * below, and we will retry the file deletion later.
                          */
                         iter.remove();
 
                          LoggerUtils.traceAndLog
                              (logger, env,
                               "Cleaner deleteSafeToDeleteFiles Log file 0x" +
                               Long.toHexString(fileNum) + " could not be " +
                               (expunge ? "deleted" : "renamed") + ". This " +
                               "operation will be retried at the next " +
                               "checkpoint. State: " + fileSelector);
                     }
                 }
             }
         } finally {
             fileManager.releaseExclusiveLock();
         }
 
         /*
          * Now unprotectedFiles contains only the files we deleted above.  We
          * can update the UP (and FileSelector) here outside of the
          * synchronization block and without the environment locked.  That way,
          * DbBackups and read-only processes will not be blocked by the
          * expensive UP operation.
          *
          * We do not retry if an error occurs deleting the UP database entries
          * below.  Retrying (when file deletion fails) is intended only to
          * solve a problem on Windows where deleting a log file isn't always
          * possible immediately after closing it.
          *
          * Remove the file from the UP before removing it from the
          * FileSelector's safe-to-delete set.  If we remove in the reverse
          * order, it may be selected for cleaning.  Always remove the file from
          * the safe-to-delete set (in a finally block) so that we don't attempt
          * to delete the file again.
          */
          profile.removePerDbMetadata
              (unprotectedFiles,
               fileSelector.getCleanedDatabases(unprotectedFiles));
          for (Long fileNum : unprotectedFiles) {
              try {
                  profile.removePerFileMetadata(fileNum);
              } finally {
                  fileSelector.removeDeletedFile
                      (fileNum, env.getMemoryBudget());
              }
              nCleanerDeletions.increment();
         }
 
         /* Leave a clue for analyzing log file deletion problems. */
         if (safeToDeleteFiles.size() > unprotectedFiles.size()) {
              LoggerUtils.traceAndLog
                  (logger, env,
                  "Cleaner has " +
                  (safeToDeleteFiles.size() - unprotectedFiles.size()) +
                  " files not deleted because they are protected by DbBackup " +
                  "or replication.");
         }
     }
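The protected-range filtering performed above (intersecting the safe-to-delete set with everything below the minimum protected file number, via `headSet` and `Collections.min`) can be isolated into a small sketch. This is an illustrative model with hypothetical names, not JE code:

```java
import java.util.Collections;
import java.util.List;
import java.util.SortedSet;

/* Toy model of DbBackup-style protected-range filtering; names illustrative. */
class ProtectedRangeSketch {

    /**
     * Returns the subset of safeToDelete that may actually be deleted.
     * Each protected range runs from its start file to the end of the log,
     * so only files strictly below the smallest range start are deletable.
     */
    static SortedSet<Long> deletableFiles(SortedSet<Long> safeToDelete,
                                          List<Long> protectedRangeStarts) {
        if (protectedRangeStarts.isEmpty()) {
            return safeToDelete;
        }
        /* headSet is exclusive of its bound, matching ">= start is protected". */
        return safeToDelete.headSet(Collections.min(protectedRangeStarts));
    }
}
```

With safe-to-delete files {1..5} and protected ranges starting at 3 and 4, only files 1 and 2 remain deletable.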

    
Adds a range of log files to be protected from deletion during a backup or similar procedures where log files must not be deleted.

This method is called automatically by the com.sleepycat.je.util.DbBackup utility and is provided here as a separate API for advanced applications that may implement a custom backup procedure.

WARNING: After calling this method, deletion of log files in the file range by the JE log cleaner will be disabled until removeProtectedFileRange(long) is called. To prevent unbounded growth of disk usage, be sure to call removeProtectedFileRange(long) to re-enable log file deletion.

Parameters:
firstProtectedFile the number of the first file to be protected. The protected range is from this file number to the last (highest numbered) file in the log.
Since:
4.0
 
     public void addProtectedFileRange(long firstProtectedFile) {
          synchronized (protectedFileRanges) {
              protectedFileRanges.add(firstProtectedFile);
         }
     }

    
Removes a range of log files from protection after a previous call to addProtectedFileRange(long).

Parameters:
firstProtectedFile the value previously passed to addProtectedFileRange(long).
Throws:
com.sleepycat.je.EnvironmentFailureException if firstProtectedFile is not currently the start of a protected range.
Since:
4.0
 
     public void removeProtectedFileRange(long firstProtectedFile) {
          synchronized (protectedFileRanges) {
              if (!protectedFileRanges.remove(firstProtectedFile)) {
                 throw EnvironmentFailureException.unexpectedState
                     ("File range starting with 0x" +
                      Long.toHexString(firstProtectedFile) +
                      " is not currently protected");
             }
         }
     }
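The add/remove pairing above is easy to get wrong: a forgotten remove disables log file deletion indefinitely, as the WARNING in addProtectedFileRange notes. The following stand-alone model (hypothetical class name, not JE's implementation) shows the mechanism and why callers should wrap it in try/finally:

```java
import java.util.LinkedList;
import java.util.List;

/* Minimal model of the addProtectedFileRange/removeProtectedFileRange pair. */
class FileRangeGuard {

    /* Start file number of each active protected range. */
    private final List<Long> protectedStarts = new LinkedList<>();

    synchronized void addProtectedFileRange(long firstProtectedFile) {
        protectedStarts.add(firstProtectedFile);
    }

    synchronized void removeProtectedFileRange(long firstProtectedFile) {
        /* Mirrors the unexpected-state check: removing an unknown range fails. */
        if (!protectedStarts.remove(firstProtectedFile)) {
            throw new IllegalStateException(
                "File range starting with 0x" +
                Long.toHexString(firstProtectedFile) +
                " is not currently protected");
        }
    }
}
```

A caller would typically write: `guard.addProtectedFileRange(first); try { /* copy files */ } finally { guard.removeProtectedFileRange(first); }` so that protection is always released, even on failure.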

    
Returns a copy of the cleaned and processed files at the time a checkpoint starts.

If non-null is returned, the checkpoint should flush an extra level, and addCheckpointedFiles() should be called when the checkpoint is complete.

 
      public CheckpointStartCleanerState getFilesAtCheckpointStart()
          throws DatabaseException {

          /* Pending LNs can prevent file deletion. */
          processPending();

          return fileSelector.getFilesAtCheckpointStart();
     }

    
When a checkpoint is complete, update the files that were returned at the beginning of the checkpoint.
 
      public void updateFilesAtCheckpointEnd(CheckpointStartCleanerState info)
          throws DatabaseException {

          fileSelector.updateFilesAtCheckpointEnd(info);
          deleteSafeToDeleteFiles();
     }

    
Update the lowUtilizationFiles and toBeCleanedFiles fields with new read-only collections.
 
 
 
    public void updateReadOnlyFileCollections() {

        lowUtilizationFiles = fileSelector.getLowUtilizationFiles();

        /* Check for backlog growth after updating toBeCleanedFiles. */
        final Set<Long> oldToBeCleaned = toBeCleanedFiles;
        final Set<Long> newToBeCleaned = fileSelector.getToBeCleanedFiles();
        toBeCleanedFiles = newToBeCleaned;
        checkBacklogGrowth(oldToBeCleaned.size(), newToBeCleaned.size());
    }

    
Use previous and current to-be-cleaned files to check backlog growth. [#21111]

A SEVERE message is logged when the trailing average increases. A growing backlog is normally due to an undersized cache, and the hope is that the application/user/admin will take action when the SEVERE message is seen.

Multiple cleaner threads may call this method concurrently, so multiple messages may be logged at around the same time. This is considered to be acceptable. This method is called only once for each file that is cleaned by FileProcessor, which limits the number of messages logged.

 
    private void checkBacklogGrowth(int oldBacklog, int newBacklog) {

        final boolean averagesAreValid;
        final float oldAvg;
        final float newAvg;

        /* Protect access to backlogAlertList. */
        synchronized (backlogAlertList) {

            /*
             * Averages are considered valid when we have the required number
             * of recent backlog values for the old and new average
             * computations.
             */
            averagesAreValid =
                (backlogAlertList.size() >= BACKLOG_ALERT_COUNT);

            oldAvg = getAverage(backlogAlertList);

            /* Append new value and remove older value(s). */
            backlogAlertList.addLast(newBacklog);
            while (backlogAlertList.size() > BACKLOG_ALERT_COUNT) {
                backlogAlertList.removeFirst();
            }

            newAvg = getAverage(backlogAlertList);
        }

        /* Don't log when the current backlog is below the alert floor. */
        if (newBacklog < BACKLOG_ALERT_FLOOR) {
            return;
        }

        /* Don't log when the backlog doesn't grow. */
        if (newBacklog <= oldBacklog) {
            return;
        }

        /* Don't log unless averages are valid and increasing. */
        if (!averagesAreValid || newAvg <= oldAvg) {
            return;
        }

        final String msg = String.format
            ("Average cleaner backlog has grown from %.1f to %.1f. If the " +
             "cleaner continues to be unable to make progress, the JE " +
             "cache size and/or number of cleaner threads are probably too " +
             "small. If this is not corrected, eventually all available " +
             "disk space will be used.", oldAvg, newAvg);

        LoggerUtils.logMsg(logger, env, Level.SEVERE, msg);
    }

    private static float getAverage(Collection<Integer> integers) {
        float total = 0;
        for (int i : integers) {
            total += i;
        }
        return total / integers.size();
    }
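The trailing-average alert logic above can be exercised in isolation. The sketch below is illustrative, not JE code (the class name, window size, and method names are assumptions): it keeps a bounded deque of recent backlog sizes and reports when the trailing average grows after a new sample is appended.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/* Illustrative sketch of a trailing-average growth check; not JE code. */
public class BacklogAlertSketch {
    private static final int ALERT_COUNT = 5;  // assumed window size
    private final Deque<Integer> recent = new ArrayDeque<>();

    /** Returns true when the window is full and the average grows. */
    public synchronized boolean averageGrew(int newBacklog) {
        /* The averages are only comparable once the window is full. */
        boolean valid = recent.size() >= ALERT_COUNT;
        float oldAvg = average();

        /* Append the new value and evict the oldest to bound the window. */
        recent.addLast(newBacklog);
        while (recent.size() > ALERT_COUNT) {
            recent.removeFirst();
        }

        float newAvg = average();
        return valid && newAvg > oldAvg;
    }

    private float average() {
        if (recent.isEmpty()) {
            return 0f;
        }
        float total = 0;
        for (int i : recent) {
            total += i;
        }
        return total / recent.size();
    }
}
```

A trailing average smooths out single-file spikes: five steady samples of 10 followed by a 20 raises the average from 10.0 to 12.0 and triggers the alert, while an isolated spike inside an otherwise shrinking window does not.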

    
If any LNs are pending, process them. This method should be called often enough to prevent the pending LN set from growing too large.
    void processPending()
        throws DatabaseException {

        /*
         * This method is not synchronized because that would block cleaner
         * and checkpointer threads unnecessarily.  However, we do prevent
         * reentrancy, for two reasons:
         * 1. It is wasteful for two threads to process the same pending
         *    entries.
         * 2. Many threads calling getDb may increase the likelihood of
         *    livelock. [#20816]
         */
        if (!processPendingReentrancyGuard.compareAndSet(false, true)) {
            return;
        }

        try {
            DbTree dbMapTree = env.getDbTree();

            Map<Long, LNInfo> pendingLNs = fileSelector.getPendingLNs();
            if (pendingLNs != null) {
                TreeLocation location = new TreeLocation();

                for (Map.Entry<Long, LNInfo> entry : pendingLNs.entrySet()) {
                    long originalLsn = entry.getKey();
                    LNInfo info = entry.getValue();
                    DatabaseId dbId = info.getDbId();
                    DatabaseImpl db = dbMapTree.getDb(dbId);
                    try {
                        byte[] key = info.getKey();
                        LN ln = info.getLN();

                        /* Evict before processing each entry. */
                        if (DO_CRITICAL_EVICTION) {
                            env.daemonEviction(true /*backgroundIO*/);
                        }

                        processPendingLN(originalLsn, ln, db, key, location);
                    } finally {
                        dbMapTree.releaseDb(db);
                    }

                    /* Sleep if background read/write limit was exceeded. */
                    env.sleepAfterBackgroundIO();
                }
            }

            DatabaseId[] pendingDBs = fileSelector.getPendingDBs();
            if (pendingDBs != null) {
                for (DatabaseId dbId : pendingDBs) {
                    DatabaseImpl db = dbMapTree.getDb(dbId);
                    try {
                        if (db == null || db.isDeleteFinished()) {
                            fileSelector.removePendingDB(dbId);
                        }
                    } finally {
                        dbMapTree.releaseDb(db);
                    }
                }
            }
        } finally {
            processPendingReentrancyGuard.set(false);
        }
    }
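The non-blocking reentrancy guard used by processPending() is a general pattern: a compareAndSet on an AtomicBoolean lets concurrent callers skip the work rather than queue up behind a lock. A minimal, hypothetical sketch of the pattern (class and method names here are illustrative, not JE identifiers):

```java
import java.util.concurrent.atomic.AtomicBoolean;

/* Illustrative sketch of a compareAndSet reentrancy guard; not JE code. */
public class ReentrancyGuardSketch {
    private final AtomicBoolean busy = new AtomicBoolean(false);
    private int runs = 0;

    /**
     * Runs the task unless another caller (including a reentrant call
     * from the task itself) is already inside; returns whether it ran.
     */
    public boolean runExclusively(Runnable task) {
        /* Atomically claim the guard; losers skip instead of blocking. */
        if (!busy.compareAndSet(false, true)) {
            return false;
        }
        try {
            task.run();
            runs++;
            return true;
        } finally {
            /* Always release, even if the task throws. */
            busy.set(false);
        }
    }

    public int getRuns() {
        return runs;
    }
}
```

Compared with a synchronized method, this keeps cleaner and checkpointer threads from stalling on each other: redundant callers return immediately, which matches the two reasons given in the comment above.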

    
Processes a pending LN, getting the lock first to ensure that the overhead of retries is minimal.
    private void processPendingLN(long originalLsn,
                                  LN ln,
                                  DatabaseImpl db,
                                  byte[] key,
                                  TreeLocation location)
        throws DatabaseException {
        boolean parentFound = false;  // We found the parent BIN.
        boolean processedHere = true; // The LN was cleaned here.
        boolean lockDenied = false;   // The LN lock was denied.
        boolean obsolete = false;     // The LN is no longer in use.
        boolean completed = false;    // This method completed.
        BasicLocker locker = null;
        BIN bin = null;
        try {
            nPendingLNsProcessed.increment();
            /*
             * If the DB is gone, this LN is obsolete.  If delete cleanup is in
             * progress, put the DB into the DB pending set; this LN will be
             * declared deleted after the delete cleanup is finished.
             */
            if (db == null || db.isDeleted()) {
                addPendingDB(db);
                nLNsDead.increment();
                obsolete = true;
                completed = true;
                return;
            }
            Tree tree = db.getTree();
            assert tree != null;
            /*
             * Get a non-blocking read lock on the original log LSN.  If this
             * fails, then the original LSN is still write-locked.  We may have
             * to lock again, if the LSN has changed in the BIN, but this
             * initial check prevents a Btree lookup in some cases.
             */
            locker = BasicLocker.createBasicLocker(false /*noWait*/);
            /* Don't allow this short-lived lock to be preempted/stolen. */
            locker.setPreemptable(false);
            LockResult lockRet =
                locker.nonBlockingLock(originalLsn, LockType.READ,
                                       false /*jumpAheadOfWaiters*/, db);
            if (lockRet.getLockGrant() == LockGrantType.DENIED) {
                /* Try again later. */
                nPendingLNsLocked.increment();
                lockDenied = true;
                completed = true;
                return;
            }
            /*
             * Search down to the bottom most level for the parent of this LN.
             */
            parentFound = tree.getParentBINForChildLN
                (location, key, false /*splitsAllowed*/,
                 true /*findDeletedEntries*/);
            bin = location.bin;
            int index = location.index;
            if (!parentFound) {
                nLNsDead.increment();
                obsolete = true;
                completed = true;
                return;
            }
            /* Migrate an LN. */
            processedHere = false;
            migrateLN
                (db, bin.getLsn(index), bin, index,
                 true,           // wasCleaned
                 true,           // isPending
                 originalLsn,
                 true,           // backgroundIO
                 CLEAN_PENDING_LN);
            completed = true;
        } catch (DatabaseException DBE) {
            DBE.printStackTrace();
            LoggerUtils.traceAndLogException
                ("com.sleepycat.je.cleaner.Cleaner",
                 "processLN""Exception thrown: "DBE);
            throw DBE;
        } finally {
            if (bin != null) {
                bin.releaseLatch();
            }
            if (locker != null) {
                locker.operationEnd();
            }
            /*
             * If migrateLN was not called above, remove the pending LN and
             * perform tracing in this method.
             */
            if (processedHere) {
                if (completed && !lockDenied) {
                    fileSelector.removePendingLN(originalLsn);
                }
                logFine(CLEAN_PENDING_LN, ln, DbLsn.NULL_LSN,
                        completed, obsolete, false /*migrated*/);
            }
        }
    }

    
Returns whether the given BIN entry may be stripped by the evictor. True is always returned if the BIN is not dirty. False is returned if the BIN is dirty and the entry will be migrated soon.

Parameters:
latched is true if the BIN is latched and an exact answer should be returned; false if the BIN may not be latched, in which case returning the wrong answer is acceptable (the method will be called again later with the BIN latched), but an exception must not occur.
    public boolean isEvictable(final BIN bin,
                               final int index,
                               final boolean latched) {
        if (bin.getDirty()) {
            if (bin.getMigrate(index)) {
                return false;
            }
            /* Cannot get LSN safely if the BIN is not latched. */
            if (!latched) {
                return true;
            }
            final long lsn = bin.getLsn(index);
            if (lsn == DbLsn.NULL_LSN) {
                /*
                 * LN is resident but never logged, no cleaning restrictions
                 * apply.
                 */
                return true;
            }
            final Long fileNum = Long.valueOf(DbLsn.getFileNumber(lsn));
            /*
             * Assume foreground eviction for now.  If we resurrect the
             * background eviction thread, the backgroundIO parameter should be
             * passed down and used here.
             */
            if (foregroundProactiveMigration &&
                toBeCleanedFiles.contains(fileNum)) {
                return false;
            }

            if ((clusterResident || clusterAll) &&
                lowUtilizationFiles.contains(fileNum)) {
                return false;
            }
        }
        return true;
    }
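The evictability rules above reduce to a small predicate: a non-dirty entry is always strippable, a dirty entry marked for migration or belonging to a file that will soon be cleaned is not. The standalone sketch below is illustrative (class and parameter names are assumptions, and the file set is passed in explicitly rather than read from cleaner state):

```java
import java.util.Set;

/* Illustrative predicate mirroring the evictability rules; not JE code. */
public class EvictableSketch {
    public static boolean isEvictable(boolean dirty,
                                      boolean migrateFlag,
                                      boolean proactiveMigration,
                                      Set<Long> toBeCleanedFiles,
                                      long fileNum) {
        if (!dirty) {
            return true;   // non-dirty entries may always be stripped
        }
        if (migrateFlag) {
            return false;  // the entry is already marked for migration
        }
        if (proactiveMigration && toBeCleanedFiles.contains(fileNum)) {
            return false;  // the entry's file will be cleaned soon
        }
        return true;
    }
}
```

Note that the dirty check dominates: the file-set lookups are only consulted for dirty entries, which keeps the common (clean-entry) path cheap for the evictor.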