  /*
   *  Copyright (c) 2012 Jan Kotek
   *
   *  Licensed under the Apache License, Version 2.0 (the "License");
   *  you may not use this file except in compliance with the License.
   *  You may obtain a copy of the License at
   *
   *    http://www.apache.org/licenses/LICENSE-2.0
   *
   *  Unless required by applicable law or agreed to in writing, software
   *  distributed under the License is distributed on an "AS IS" BASIS,
   *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   *  See the License for the specific language governing permissions and
   *  limitations under the License.
   */
 
 package org.mapdb;

The centerpiece of record management, `Engine` is a simple key-value store. It is a low-level interface and is not meant to be used directly; for most operations the DB class should be used instead. In this store the key is a primitive `long` number, typically a pointer into an index table, and the value is a class instance. To turn a value into and from its binary form, a serializer is required as an extra argument for most operations. Unlike other databases, MapDB does not expect the user to (de)serialize data before passing it as an argument; instead MapDB controls (de)serialization itself. This gives the DB a lot of flexibility: for example, instances may be held in a cache to minimise the number of deserializations, or a modified instance can be placed into a queue and written asynchronously on a background thread.

There is a Store subinterface for raw persistence. Most MapDB features come from EngineWrappers, which are stacked on top of each other to provide asynchronous writes, an instance cache, encryption and so on. The `Engine` stack is an elegant and uniform way to add functionality; other databases need an ORM framework to achieve similar features. In the default configuration MapDB runs with this `Engine` stack:

* **DISK** - raw file or memory
* StoreWAL - permanent record store with transactions
* AsyncWriteEngine - asynchronous writes to storage
* EngineWrapper.ByteTransformEngine - compression or encryption (optional)
* Caches.HashTable - instance cache
* SnapshotEngine - support for snapshots
* **USER** - DB and collections

Engine uses a `recid` to identify records. There is zero error handling if the recid is invalid (a random number or an already deleted record); passing an illegal recid may result in anything (returning null, throwing an EOF exception, or even corrupting the store). Engine is a low-level component and it is the responsibility of the upper layers (collections) to keep recids consistent. The lack of error handling is a trade-off for speed, much like manual memory management in C++.
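For orientation, here is a minimal sketch of reaching the Engine underneath a DB and performing a raw put/get. It assumes the MapDB 1.x builder API (`DBMaker.newMemoryDB()`, `DB.getEngine()`) and the bundled `Serializer.STRING`; regular application code should stay with DB and its collections.

    import org.mapdb.*;

    public class EngineDemo {
        public static void main(String[] args) {
            // DBMaker assembles the Engine stack described above
            // (store -> optional async writes/encryption -> cache -> snapshots -> DB).
            DB db = DBMaker.newMemoryDB().make();

            // The raw Engine under the DB; normally only collections talk to it.
            Engine engine = db.getEngine();            // assumed accessor, MapDB 1.x

            long recid = engine.put("hello", Serializer.STRING);
            String value = engine.get(recid, Serializer.STRING);
            System.out.println(value);

            engine.commit();
            db.close();
        }
    }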

Engine must support `null` record values. You may insert, update and fetch null records. Nulls play an important role in recid preallocation and asynchronous writes.

A recid can be reused after it has been deleted. If your application relies on recids being unique, you should update the record with a null value instead of deleting it. A null record consumes only 8 bytes in the store and is preserved during defragmentation.
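As a sketch of that advice (assuming `Serializer.STRING` for the record type and an open `engine`):

    // Keep the recid alive by tombstoning the record with null...
    engine.update(recid, null, Serializer.STRING);

    // ...instead of deleting it, which would free the recid for reuse:
    // engine.delete(recid, Serializer.STRING);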

Author(s):
Jan Kotek
 
 public interface Engine {
 
     long CATALOG_RECID = 1;
     long CLASS_INFO_RECID = 2;
     long CHECK_RECORD = 3;
     long LAST_RESERVED_RECID = 7;


    
Insert new record.

Parameters:
value record to be added
serializer used to convert record into/from binary form
<A> type of record
Returns:
recid (record identifier) under which record is stored.
 
    <A> long put(A value, Serializer<A> serializer);
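For example, inserting a boxed long with the `Serializer.LONG` constant bundled with MapDB (a sketch; `engine` is any open Engine):

    // Insert a new record; the returned recid is the only handle to it.
    long recid = engine.put(42L, Serializer.LONG);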

    
Get existing record.

Recid must be a number returned by 'put' method. Behaviour for invalid recid (random number or already deleted record) is not defined, typically it returns null or throws 'EndOfFileException'.

Parameters:
recid (record identifier) under which record was persisted
serializer used to deserialize record from binary form
<A> record type
Returns:
record matching given recid, or null if record is not found under given recid.
 
    <A> A get(long recid, Serializer<A> serializer);
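Reading the record back with the same serializer it was written with (continuing the put sketch above):

    // Fetch the record; returns null if the stored value is null.
    Long stored = engine.get(recid, Serializer.LONG);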

    
Update existing record with new value.

Recid must be a number returned by 'put' method. Behaviour for invalid recid (random number or already deleted record) is not defined, typically it throws 'EndOfFileException', but it may also corrupt store.

Parameters:
recid (record identifier) under which record was persisted.
value new record value to be stored
serializer used to serialize record into binary form
<A> record type
    <A> void update(long recid, A value, Serializer<A> serializer);
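Continuing the sketch, an in-place overwrite keeps the recid stable:

    // Replace the record value; the recid does not change.
    engine.update(recid, 43L, Serializer.LONG);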


    
Updates existing record in atomic (Compare And Swap) manner. Value is modified only if old value matches expected value. There are three ways to match values, MapDB may use any of them:
  1. Equality check oldValue==expectedOldValue when old value is found in instance cache
  2. Deserializing oldValue using serializer and checking oldValue.equals(expectedOldValue)
  3. Serializing expectedOldValue using serializer and comparing binary array with already serialized oldValue

Recid must be a number returned by 'put' method. Behaviour for invalid recid (random number or already deleted record) is not defined, typically it throws 'EndOfFileException', but it may also corrupt store.

Parameters:
recid (record identifier) under which record was persisted.
expectedOldValue old value to be compared with existing record
newValue new value to be written if the values match
serializer used to serialize record into binary form
<A> record type
Returns:
true if values matched and newValue was written
    <A> boolean compareAndSwap(long recid, A expectedOldValue, A newValue, Serializer<A> serializer);
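A typical optimistic-update loop built on top of compareAndSwap (a sketch with `Serializer.LONG`; the counter record is assumed to hold a non-null Long):

    // Retry until no concurrent writer changes the record between read and swap.
    while (true) {
        Long old = engine.get(recid, Serializer.LONG);
        Long incremented = old + 1;
        if (engine.compareAndSwap(recid, old, incremented, Serializer.LONG)) {
            break;
        }
    }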

    
Remove existing record from store/cache.

Recid must be a number returned by 'put' method. Behaviour for invalid recid (random number or already deleted record) is not defined, typically it throws 'EndOfFileException', but it may also corrupt store.

Parameters:
recid (record identifier) under which the record was persisted
serializer which may be used in some circumstances to deserialize and store old object
    <A> void delete(long recid, Serializer<A> serializer);
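A sketch of delete; as noted earlier, an update with null is the alternative when the recid must stay reserved:

    // Remove the record; the recid may later be handed out again by put().
    engine.delete(recid, Serializer.LONG);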



    
Close store/cache. This method must be called before the JVM exits, to flush all caches and prevent store corruption. It also releases resources used by MapDB (disk, memory, ...).

Engine can no longer be used after this method was called. If Engine is used after closing, it may throw any exception including NullPointerException

There is a configuration option, DBMaker.closeOnJvmShutdown(), which uses a shutdown hook to automatically close the Engine when the JVM shuts down.
    void close();
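Either close explicitly or rely on the shutdown-hook option mentioned above (a sketch of the MapDB 1.x builder call):

    // Explicit close flushes caches and releases resources.
    engine.close();

    // Or register a shutdown hook when building the store:
    DB db = DBMaker.newMemoryDB()
            .closeOnJvmShutdown()
            .make();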


    
Checks whether Engine was closed.

Returns:
true if engine was closed
    public boolean isClosed();

    
Makes all changes made since the previous commit/rollback permanent. In transactional mode (on by default) this means creating a journal file and replaying it into the storage. In other modes it may flush disk caches or do nothing at all (check your configuration options).
    void commit();

    
Undoes all changes made in the current transaction. If transactions are disabled it throws java.lang.UnsupportedOperationException.

Throws:
java.lang.UnsupportedOperationException if transactions are disabled
    void rollback() throws UnsupportedOperationException;
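A small transactional sketch covering commit and rollback (assumes transactions are enabled, which is the default; `Serializer.STRING` is used for the record):

    long recid = engine.put("draft", Serializer.STRING);
    engine.commit();                                   // "draft" is now durable

    engine.update(recid, "edited", Serializer.STRING);
    engine.rollback();                                 // store is back at "draft"

    // With transactions disabled (DBMaker...transactionDisable()) rollback()
    // throws UnsupportedOperationException and canRollback() returns false.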

    
Check if you can write into this Engine. It may be read-only in some cases (snapshot, read-only files).

Returns:
true if engine is read-only
    boolean isReadOnly();

    

Returns:
true if engine supports rollback
    boolean canRollback();

    
Clears any underlying cache.
    void clearCache();

Compacts the store and reclaims space released by deleted records.
    void compact();
}
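Both maintenance calls in one sketch:

    engine.clearCache();   // drop cached instances to free heap
    engine.compact();      // defragment the store and reclaim space from deleted records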