 // Copyright 2010 Google Inc.
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
 // you may not use this file except in compliance with the License.
 // You may obtain a copy of the License at
 //
 //   http://www.apache.org/licenses/LICENSE-2.0
 //
 // Unless required by applicable law or agreed to in writing, software
 // distributed under the License is distributed on an "AS IS" BASIS,
 // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 // See the License for the specific language governing permissions and
 // limitations under the License.
 import java.util.ArrayList;
 import java.util.Iterator;
 import java.util.List;

Base implementation for a Schema in a GenericDatabase.

NoSQL implementors must extend this class and provide implementations for the abstract methods declared here. Each Schema instance will wrap one thread's connection to the data store. Therefore, unlike Database, this class does not need to be thread-safe.

 public abstract class GenericSchema extends NoSqlSchema {
   private final GenericDatabase<?, ?, ?> db;

   protected GenericSchema(final GenericDatabase<?, ?, ?> d) {
     super(d);
     db = d;
   }

   public void flush() throws OrmException {
   }


Returns the database that created this schema instance.
   public GenericDatabase<?, ?, ?> getDatabase() {
     return db;
   }

Allocate a new unique value from a pool of values.

This method is only required to return a unique value per invocation. Implementors may override the method to provide an implementation that returns values out of order.

The default implementation of this method stores a CounterShard under the row key ".sequence." + poolName, and updates it through the atomic semantics of atomicUpdate(byte[], AtomicUpdate). If the row does not yet exist, it is initialized and the value 1 is returned.

poolName name of the value pool to allocate from. This is typically the name of a sequence in the schema.
Returns a new unique value.
Throws: OrmException if a unique value cannot be obtained.
   protected long nextLong(final String poolName) throws OrmException {
     IndexKeyBuilder b = new IndexKeyBuilder();
     b.add(".sequence." + poolName);
     try {
       final long[] res = new long[1];
       atomicUpdate(b.toByteArray(), new AtomicUpdate<byte[]>() {
         public byte[] update(byte[] val) {
           CounterShard ctr;
           if (val != null) {
             ctr = CounterShard.CODEC.decode(val);
           } else {
             long start = 1;
             for (SequenceModel s : getDatabase().getSchemaModel()
                 .getSequences()) {
               if (poolName.equals(s.getSequenceName())) {
                 start = s.getSequence().startWith();
                 if (start == 0) {
                   start = 1;
                 }
                 break;
               }
             }
             ctr = new CounterShard(start, Long.MAX_VALUE);
           }
           if (ctr.isEmpty()) {
             throw new NoMoreValues();
           }
           res[0] = ctr.next();
           return CounterShard.CODEC.encodeToByteString(ctr).toByteArray();
         }
       });
       return res[0];
     } catch (NoMoreValues err) {
       throw new OrmException("Counter '" + poolName + "' out of values");
     }
   }
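As a rough illustration of the pool semantics described above (a pool that does not yet exist hands out 1 first, and every call returns a distinct value), here is a minimal in-memory sketch. The class and method names are invented for this example and are not part of the gwtorm API; the real implementation goes through the store's atomic single-row update instead of a synchronized map.

```java
import java.util.HashMap;
import java.util.Map;

public class InMemoryCounterPool {
  private final Map<String, Long> counters = new HashMap<>();

  // Returns a unique value per invocation for the named pool.
  // A pool that does not exist yet is initialized so the first
  // value handed out is 1, mirroring the default described above.
  public synchronized long next(String poolName) {
    long val = counters.getOrDefault(poolName, 1L);
    counters.put(poolName, val + 1);
    return val;
  }

  public static void main(String[] args) {
    InMemoryCounterPool pool = new InMemoryCounterPool();
    System.out.println(pool.next("account_id")); // prints 1
    System.out.println(pool.next("account_id")); // prints 2
  }
}
```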

Fetch one row's data.

The default implementation of this method creates a pair of keys and passes them to scan(byte[],byte[],int,boolean). The fromKey is the supplied key, while the toKey has '\0' appended onto key. If more than one row matches in that range, the method throws an exception.

key key of the row to fetch and return.
Returns the data stored under key; null if no row exists.
Throws: OrmDuplicateKeyException if more than one row was identified in the key scan; OrmException if the data store cannot process the request.
   public byte[] fetchRow(byte[] key) throws OrmDuplicateKeyException,
       OrmException {
     final byte[] fromKey = key;
     final byte[] toKey = new byte[key.length + 1];
     System.arraycopy(key, 0, toKey, 0, key.length);
     ResultSet<Row> r = scan(fromKey, toKey, 2, false);
     try {
       Iterator<Row> i = r.iterator();
       if (!i.hasNext()) {
         return null;
       }
       byte[] data = i.next().getValue();
       if (i.hasNext()) {
         throw new OrmDuplicateKeyException("Unexpected duplicate keys");
       }
       return data;
     } finally {
       r.close();
     }
   }
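The '\0'-append trick relies on the store ordering keys as unsigned byte strings: the half-open range [key, key + 0x00) then contains exactly the one key, since any longer key sharing the prefix sorts at or after key + 0x00. This can be checked with a self-contained comparator; the class and helper names below are illustrative only.

```java
import java.util.Arrays;

public class KeyRangeDemo {
  // Unsigned lexicographic order, as a NoSQL store sorts row keys.
  static int cmp(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int d = (a[i] & 0xff) - (b[i] & 0xff);
      if (d != 0) {
        return d;
      }
    }
    return a.length - b.length;
  }

  // True if key falls in the half-open range [fromKey, toKey).
  static boolean inRange(byte[] key, byte[] fromKey, byte[] toKey) {
    return cmp(fromKey, key) <= 0 && cmp(key, toKey) < 0;
  }

  public static void main(String[] args) {
    byte[] key = {'a', 'b'};
    byte[] toKey = Arrays.copyOf(key, key.length + 1); // key with '\0' appended
    System.out.println(inRange(key, key, toKey));                      // true: the key itself
    System.out.println(inRange(new byte[] {'a', 'b', 1}, key, toKey)); // false: longer sibling
    System.out.println(inRange(toKey, key, toKey));                    // false: toKey is exclusive
  }
}
```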

Fetch multiple rows at once.

The default implementation of this method iterates over the keys in sequence, executing a sequential fetch with fetchRow(byte[]) for each.

keys keys to fetch and return.
Returns an iteration over the rows that exist and appear in keys.
Throws: OrmException if the data store cannot process the request.
   public ResultSet<Row> fetchRows(Iterable<byte[]> keys) throws OrmException {
     List<Row> r = new ArrayList<Row>();
     for (byte[] key : keys) {
       byte[] val = fetchRow(key);
       if (val != null) {
         r.add(new Row(key, val));
       }
     }
     return new ListResultSet<Row>(r);
   }

Scan a range of keys and return any matching objects.

To fetch a single record with a scan, set toKey to the same array as fromKey, but append a trailing NUL byte (0x00). The caller should validate that the returned ResultSet contains no more than 1 row.

The resulting iteration does not support remove.

Each iteration element is a map entry, describing the row key and the row value. The map entry's value cannot be changed.

fromKey key to start the scan on. This is inclusive.
toKey key to stop the scan on. This is exclusive.
limit maximum number of results to return.
order if true, the key order will be preserved; if false, the result order can be arbitrary.
Returns the result iteration for the requested range. The result set may be lazily filled, or filled completely.
Throws: OrmException if an error occurred preventing the scan from completing.
   public abstract ResultSet<Row> scan(byte[] fromKey, byte[] toKey, int limit,
       boolean order) throws OrmException;

Atomically insert one row, failing if the row already exists.

The default implementation of this method relies upon the atomic nature of the atomicUpdate(byte[], AtomicUpdate) primitive to test for the row's existence, and create the row only if it is not found.

key key of the new row to insert.
newData data of the new row.
Throws: OrmDuplicateKeyException if another row already exists with the specified key; OrmException if the data store cannot process the request right now, for example due to a network connectivity problem.
   public void insert(byte[] key, final byte[] newData)
       throws OrmDuplicateKeyException, OrmException {
     try {
       atomicUpdate(key, new AtomicUpdate<byte[]>() {
         public byte[] update(byte[] oldData) {
           if (oldData != null) {
             throw new KeyExists();
           }
           return newData;
         }
       });
     } catch (KeyExists err) {
       throw new OrmDuplicateKeyException("Duplicate key");
     }
   }
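The pattern above — throwing a private unchecked marker exception out of the atomic callback, then translating it into a checked duplicate-key error — can be sketched against a plain map standing in for the store. MapStore, its atomicUpdate stand-in, and the use of IllegalStateException in place of OrmDuplicateKeyException are all invented for this example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

public class MapStore {
  private final Map<String, byte[]> rows = new HashMap<>();

  // Stand-in for the single-row atomicUpdate primitive: apply the
  // callback to the current value and store a non-null result.
  synchronized void atomicUpdate(String key, UnaryOperator<byte[]> update) {
    byte[] updated = update.apply(rows.get(key));
    if (updated != null) {
      rows.put(key, updated);
    }
  }

  // Marker exception escapes the callback to signal a duplicate.
  static class KeyExists extends RuntimeException {}

  public void insert(String key, byte[] newData) {
    try {
      atomicUpdate(key, oldData -> {
        if (oldData != null) {
          throw new KeyExists(); // row already present
        }
        return newData;
      });
    } catch (KeyExists err) {
      // The real code rethrows OrmDuplicateKeyException here.
      throw new IllegalStateException("Duplicate key: " + key);
    }
  }

  public static void main(String[] args) {
    MapStore s = new MapStore();
    s.insert("k", new byte[] {1}); // first insert succeeds
    try {
      s.insert("k", new byte[] {2});
    } catch (IllegalStateException e) {
      System.out.println("duplicate rejected"); // second insert fails
    }
  }
}
```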

Update a single row, inserting it if it does not exist.

Unlike insert, this method always succeeds.

key key of the row to update, or insert if missing.
data data to store at this row.
Throws: OrmException if the data store cannot process the request, for example due to a network connectivity problem.
   public abstract void upsert(byte[] key, byte[] data) throws OrmException;

Delete the row stored under the given key.

If the row does not exist, this method must complete successfully anyway. The intent of the caller is to ensure the row does not exist when the method completes, and a row that did not exist satisfies that intent.

key the key to delete.
Throws: OrmException if the data store cannot perform the removal.
   public abstract void delete(byte[] key) throws OrmException;

Atomically read and update a single row.

Unlike Schema's atomicUpdate() method, this method must handle missing rows. Implementations must be logically equivalent to the following, but performed atomically within the scope of the single row key:

 byte[] oldData = get(key);
 byte[] newData = update.update(oldData);
 if (newData != null) {
   upsert(key, newData);
 } else if (oldData != null) {
   delete(key);
 }

Secondary index row updates are assumed to never be part of the atomic update transaction. This is an intentional design decision to fit with many NoSQL products' limitation of supporting only single-row atomic updates.

The update method may be invoked multiple times before the operation is considered successful. This permits an implementation to perform an opportunistic update attempt, and retry the update if the same row was modified by another concurrent worker.

key the row key to read, update and return.
update action to perform on the row's data element. The action may be passed null if the row doesn't exist.
Throws: OrmException if the database cannot perform the update.
   public abstract void atomicUpdate(byte[] key, AtomicUpdate<byte[]> update)
       throws OrmException;
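An implementation satisfying this contract typically uses optimistic concurrency: read the row, apply the callback, then compare-and-swap, retrying when a concurrent writer intervened — which is why the callback may run more than once, as noted above. Here is a sketch over a single row held in an AtomicReference; all names are illustrative, and a real store would CAS against a row version rather than an in-process reference.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

public class SingleRowCas {
  // The one row's value; null means the row does not exist.
  private final AtomicReference<byte[]> row = new AtomicReference<>();

  public byte[] get() {
    return row.get();
  }

  // The callback may be invoked several times if another writer
  // races us, matching the retry semantics described above.
  public void atomicUpdate(UnaryOperator<byte[]> update) {
    while (true) {
      byte[] oldData = row.get();
      byte[] newData = update.apply(oldData);
      if (newData != null) {
        if (row.compareAndSet(oldData, newData)) {
          return; // upsert succeeded
        }
      } else if (oldData != null) {
        if (row.compareAndSet(oldData, null)) {
          return; // delete succeeded
        }
      } else {
        return; // row absent and callback kept it absent
      }
      // CAS failed: the row changed underneath us; retry.
    }
  }

  public static void main(String[] args) {
    SingleRowCas r = new SingleRowCas();
    r.atomicUpdate(old -> new byte[] {42}); // insert
    r.atomicUpdate(old -> null);            // delete
    System.out.println(r.get());            // prints null
  }
}
```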

Check (and delete) an index row if it is a fossil.

As index rows are written ahead of the main data row being written out, scans sometimes see an index row that does not match the data row. These are ignored for a short period (GenericDatabase.getMaxFossilAge()) to allow the primary data row to eventually get written out. If however the writer never finished the update, these index rows are stale and need to be pruned. Any index row older than the fossil age is removed by this method.

now timestamp when the current scan started.
key the index row key.
row the index row data.
   public void maybeFossilCollectIndexRow(long now, byte[] key, IndexRow row) {
     if (row.getTimestamp() + getDatabase().getMaxFossilAge() <= now) {
       fossilCollectIndexRow(key, row);
     }
   }

Delete the given fossil index row.

This method is logically the same as delete(byte[]), but it is separated out to permit asynchronous delivery of the delete events, since these arrive during an index scan and are less time-critical than other delete operations.

The default implementation of this method calls delete(byte[]).

key index key to remove.
row the index row data.
   protected void fossilCollectIndexRow(byte[] key, IndexRow row) {
     try {
       delete(key);
     } catch (OrmException e) {
       // Ignore a fossil delete error.
     }
   }

   private static class KeyExists extends RuntimeException {
   }

   private static class NoMoreValues extends RuntimeException {
   }
 }