Remove SQL support for Ledger State #4504

Closed · wants to merge 8 commits
2 changes: 1 addition & 1 deletion docs/integration.md
@@ -19,7 +19,7 @@ stellar-core generates several types of data that can be used by applications, d

Full [Ledger](ledger.md) snapshots are available in both:
* [history archives](history.md) (checkpoints, every 64 ledgers, updated every 5 minutes)
* in the case of captive-core (enabled via the `--in-memory` command line option) the ledger is maintained within the stellar-core process and ledger-state need to be tracked as it changes via "meta" updates.
* in the case of captive-core, the ledger is maintained within the stellar-core process, and ledger state needs to be tracked as it changes via "meta" updates.

## Ledger State transition information (transactions, etc)

17 changes: 3 additions & 14 deletions docs/quick-reference.md
@@ -147,27 +147,16 @@ transactions or ledger states) must be downloaded and verified sequentially. It
worthwhile to save and reuse such a trusted reference file multiple times before regenerating it.

##### Experimental fast "meta data generation"
`catchup` has a command line flag `--in-memory` that when combined with the
`METADATA_OUTPUT_STREAM` allows a stellar-core instance to stream meta data instead
of using a database as intermediate store.
`catchup`, when combined with
`METADATA_OUTPUT_STREAM`, allows a stellar-core instance to stream meta data.

This has been tested as being orders of magnitude faster for replaying large sections
of history.

If you don't specify any value for the stream, the command will just replay transactions
in memory and throw away all meta. This can be useful for performance testing the transaction processing subsystem.
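As a sketch of how streaming might be wired up (the file path and ledger range below are illustrative, not defaults), the stream target is set in the config file:

```toml
# Write meta as a stream of XDR frames to this file; the "fd:N" form
# (e.g. "fd:3") streams to an inherited file descriptor instead.
METADATA_OUTPUT_STREAM="/tmp/stellar-meta.xdr"
```

A replay such as `stellar-core catchup 500000/1000 --conf stellar-core.cfg` would then emit meta for the replayed range.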

The `--in-memory` flag is also supported by the `run` command, which can be used to
run a lightweight, stateless validator or watcher node, and this can be combined with
`METADATA_OUTPUT_STREAM` to stream network activity to another process.

By default, such a stateless node in `run` mode will catch up to the network starting from the
network's most recent checkpoint, but this behaviour can be further modified using two flags
(that must be used together) called `--start-at-ledger <N>` and `--start-at-hash <HEXHASH>`. These
cause the node to start with a fast in-memory catchup to ledger `N` with hash `HEXHASH`, and then
replay ledgers forward to the current state of the network.

A stateless and meta-streaming node can additionally be configured with
A meta-streaming node can additionally be configured with
`EXPERIMENTAL_PRECAUTION_DELAY_META=true` (if unspecified, the default is
`false`). If `EXPERIMENTAL_PRECAUTION_DELAY_META` is `true`, then the node will
delay emitting meta for a ledger `<N>` until the _next_ ledger, `<N+1>`, closes.
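A minimal configuration sketch combining the two settings (the descriptor number is illustrative):

```toml
# Stream meta on inherited file descriptor 3, delaying each ledger's meta
# until the following ledger closes.
METADATA_OUTPUT_STREAM="fd:3"
EXPERIMENTAL_PRECAUTION_DELAY_META=true
```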
8 changes: 1 addition & 7 deletions docs/software/commands.md
@@ -159,13 +159,7 @@ apply.
checkpoint from a history archive.
* **run**: Runs stellar-core service.<br>
Option **--wait-for-consensus** lets validators wait to hear from the network
before participating in consensus.<br>
(deprecated) Option **--in-memory** stores the current ledger in memory rather than a
database.<br>
(deprecated) Option **--start-at-ledger <N>** starts **--in-memory** mode with a catchup to
ledger **N** then replays to the current state of the network.<br>
(deprecated) Option **--start-at-hash <HASH>** provides a (mandatory) hash for the ledger
**N** specified by the **--start-at-ledger** option.
before participating in consensus.
* **sec-to-pub**: Reads a secret key on standard input and outputs the
corresponding public key. Both keys are in Stellar's standard
base-32 ASCII format.
20 changes: 1 addition & 19 deletions docs/stellar-core_example.cfg
@@ -229,14 +229,6 @@ FLOOD_DEMAND_BACKOFF_DELAY_MS = 500
# against each other.
MAX_DEX_TX_OPERATIONS_IN_TX_SET = 0

# DEPRECATED_SQL_LEDGER_STATE (bool) default false
# When set to true, SQL is used to store all ledger state instead of
# BucketListDB. This is not recommended and may cause performance degradation.
# This is deprecated and will be removed in the future. Note that offers table
# is still maintained in SQL when this is set to false, but all other ledger
# state tables are dropped.
DEPRECATED_SQL_LEDGER_STATE = false

# BUCKETLIST_DB_INDEX_PAGE_SIZE_EXPONENT (Integer) default 14
# Determines page size used by BucketListDB for range indexes, where
# pageSize == 2^BUCKETLIST_DB_INDEX_PAGE_SIZE_EXPONENT. If set to
@@ -258,11 +250,6 @@ BUCKETLIST_DB_INDEX_CUTOFF = 20
# this value is ignored and indexes are never persisted.
BUCKETLIST_DB_PERSIST_INDEX = true

# BACKGROUND_EVICTION_SCAN (bool) default true
# Determines whether eviction scans occur in the background thread. Requires
# that DEPRECATED_SQL_LEDGER_STATE is set to false.
BACKGROUND_EVICTION_SCAN = true

# EXPERIMENTAL_BACKGROUND_OVERLAY_PROCESSING (bool) default false
# Determines whether some of overlay processing occurs in the background
# thread.
@@ -601,17 +588,12 @@ MAX_SLOTS_TO_REMEMBER=12
# only a passive "watcher" node.
METADATA_OUTPUT_STREAM=""

# Setting EXPERIMENTAL_PRECAUTION_DELAY_META to true causes a stateless node
# Setting EXPERIMENTAL_PRECAUTION_DELAY_META to true causes a node
# which is streaming meta to delay streaming the meta for a given ledger until
# it closes the next ledger. This ensures that if a local bug had corrupted the
# given ledger, then the meta for the corrupted ledger will never be emitted, as
# the node will not be able to reach consensus with the network on the next
# ledger.
#
# Setting EXPERIMENTAL_PRECAUTION_DELAY_META to true in combination with a
# non-empty METADATA_OUTPUT_STREAM (which can be configured on the command line
# as well as in the config file) requires an in-memory database (specified by
# using --in-memory on the command line).
EXPERIMENTAL_PRECAUTION_DELAY_META=false

# Number of ledgers worth of transaction metadata to preserve on disk for
1 change: 0 additions & 1 deletion docs/stellar-core_example_validators.cfg
@@ -4,7 +4,6 @@ PUBLIC_HTTP_PORT=false
NETWORK_PASSPHRASE="Example configuration"

DATABASE="sqlite3://example.db"
DEPRECATED_SQL_LEDGER_STATE = false

NODE_SEED="SA7FGJMMUIHNE3ZPI2UO5I632A7O5FBAZTXFAIEVFA4DSSGLHXACLAIT a3"
NODE_HOME_DOMAIN="domainA"
1 change: 0 additions & 1 deletion docs/stellar-core_standalone.cfg
@@ -12,7 +12,6 @@ NODE_IS_VALIDATOR=true

#DATABASE="postgresql://dbname=stellar user=postgres password=password host=localhost"
DATABASE="sqlite3://stellar.db"
DEPRECATED_SQL_LEDGER_STATE = false

COMMANDS=["ll?level=debug"]

1 change: 0 additions & 1 deletion docs/stellar-core_testnet.cfg
@@ -4,7 +4,6 @@ PUBLIC_HTTP_PORT=false
NETWORK_PASSPHRASE="Test SDF Network ; September 2015"

DATABASE="sqlite3://stellar.db"
DEPRECATED_SQL_LEDGER_STATE = false

# Stellar Testnet validators
[[HOME_DOMAINS]]
1 change: 0 additions & 1 deletion docs/stellar-core_testnet_legacy.cfg
@@ -9,7 +9,6 @@ KNOWN_PEERS=[
"core-testnet3.stellar.org"]

DATABASE="sqlite3://stellar.db"
DEPRECATED_SQL_LEDGER_STATE = false
UNSAFE_QUORUM=true
FAILURE_SAFETY=1

1 change: 0 additions & 1 deletion docs/stellar-core_testnet_validator.cfg
@@ -4,7 +4,6 @@ PUBLIC_HTTP_PORT=false
NETWORK_PASSPHRASE="Test SDF Network ; September 2015"

DATABASE="sqlite3://stellar.db"
DEPRECATED_SQL_LEDGER_STATE = false

# Configuring the node as a validator
# note that this is an unsafe configuration in this particular setup:
120 changes: 46 additions & 74 deletions src/bucket/BucketApplicator.cpp
@@ -9,6 +9,7 @@
#include "ledger/LedgerTxn.h"
#include "ledger/LedgerTxnEntry.h"
#include "main/Application.h"
#include "util/GlobalChecks.h"
#include "util/Logging.h"
#include "util/types.h"
#include <fmt/format.h>
@@ -21,14 +22,12 @@ BucketApplicator::BucketApplicator(Application& app,
uint32_t minProtocolVersionSeen,
uint32_t level,
std::shared_ptr<LiveBucket const> bucket,
std::function<bool(LedgerEntryType)> filter,
std::unordered_set<LedgerKey>& seenKeys)
: mApp(app)
, mMaxProtocolVersion(maxProtocolVersion)
, mMinProtocolVersionSeen(minProtocolVersionSeen)
, mLevel(level)
, mBucketIter(bucket)
, mEntryTypeFilter(filter)
, mSeenKeys(seenKeys)
{
auto protocolVersion = mBucketIter.getMetadata().ledgerVersion;
@@ -40,8 +39,8 @@
protocolVersion, mMaxProtocolVersion));
}

// Only apply offers if BucketListDB is enabled
if (mApp.getConfig().isUsingBucketListDB() && !bucket->isEmpty())
// Only apply offers
if (!bucket->isEmpty())
{
auto offsetOp = bucket->getOfferRange();
if (offsetOp)
@@ -62,10 +61,8 @@ BucketApplicator::operator bool() const
{
// There is more work to do (i.e. (bool) *this == true) iff:
// 1. The underlying bucket iterator is not EOF and
// 2. Either BucketListDB is not enabled (so we must apply all entry types)
// or BucketListDB is enabled and we have offers still remaining.
return static_cast<bool>(mBucketIter) &&
(!mApp.getConfig().isUsingBucketListDB() || mOffersRemaining);
// 2. We have offers still remaining.
return static_cast<bool>(mBucketIter) && mOffersRemaining;
}

size_t
@@ -81,20 +78,19 @@ BucketApplicator::size() const
}

static bool
shouldApplyEntry(std::function<bool(LedgerEntryType)> const& filter,
BucketEntry const& e)
shouldApplyEntry(BucketEntry const& e)
{
if (e.type() == LIVEENTRY || e.type() == INITENTRY)
{
return filter(e.liveEntry().data.type());
return BucketIndex::typeNotSupported(e.liveEntry().data.type());
}

if (e.type() != DEADENTRY)
{
throw std::runtime_error(
"Malformed bucket: unexpected non-INIT/LIVE/DEAD entry.");
}
return filter(e.deadEntry().type());
return BucketIndex::typeNotSupported(e.deadEntry().type());
}

size_t
@@ -110,11 +106,13 @@ BucketApplicator::advance(BucketApplicator::Counters& counters)
// directly instead of creating a temporary inner LedgerTxn
// as "advance" commits changes during each step this does not introduce any
// new failure mode
#ifdef BUILD_TESTS
if (mApp.getConfig().MODE_USES_IN_MEMORY_LEDGER)
{
ltx = static_cast<AbstractLedgerTxn*>(&root);
}
else
#endif
{
innerLtx = std::make_unique<LedgerTxn>(root, false);
ltx = innerLtx.get();
@@ -127,8 +125,7 @@ BucketApplicator::advance(BucketApplicator::Counters& counters)
// returns the file offset at the end of the currently loaded entry.
// This means we must read until pos is strictly greater than the upper
// bound so that we don't skip the last offer in the range.
auto isUsingBucketListDB = mApp.getConfig().isUsingBucketListDB();
if (isUsingBucketListDB && mBucketIter.pos() > mUpperBoundOffset)
if (mBucketIter.pos() > mUpperBoundOffset)
{
mOffersRemaining = false;
break;
@@ -137,89 +134,64 @@ BucketApplicator::advance(BucketApplicator::Counters& counters)
BucketEntry const& e = *mBucketIter;
LiveBucket::checkProtocolLegality(e, mMaxProtocolVersion);

if (shouldApplyEntry(mEntryTypeFilter, e))
if (shouldApplyEntry(e))
{
if (isUsingBucketListDB)
if (e.type() == LIVEENTRY || e.type() == INITENTRY)
{
if (e.type() == LIVEENTRY || e.type() == INITENTRY)
{
auto [_, wasInserted] =
mSeenKeys.emplace(LedgerEntryKey(e.liveEntry()));
auto [_, wasInserted] =
mSeenKeys.emplace(LedgerEntryKey(e.liveEntry()));

// Skip seen keys
if (!wasInserted)
{
continue;
}
}
else
// Skip seen keys
if (!wasInserted)
{
// Only apply INIT and LIVE entries
mSeenKeys.emplace(e.deadEntry());
continue;
}
}
else
{
// Only apply INIT and LIVE entries
mSeenKeys.emplace(e.deadEntry());
continue;
}

counters.mark(e);

if (e.type() == LIVEENTRY || e.type() == INITENTRY)
// DEAD and META entries skipped
releaseAssert(e.type() == LIVEENTRY || e.type() == INITENTRY);
// The last level can have live entries, but at that point we
// know that they are actually init entries because the earliest
// state of all entries is init, so we mark them as such here
if (mLevel == LiveBucketList::kNumLevels - 1 &&
e.type() == LIVEENTRY)
{
// The last level can have live entries, but at that point we
// know that they are actually init entries because the earliest
// state of all entries is init, so we mark them as such here
if (mLevel == LiveBucketList::kNumLevels - 1 &&
e.type() == LIVEENTRY)
{
ltx->createWithoutLoading(e.liveEntry());
}
else if (
protocolVersionIsBefore(
mMinProtocolVersionSeen,
LiveBucket::
FIRST_PROTOCOL_SUPPORTING_INITENTRY_AND_METAENTRY))
ltx->createWithoutLoading(e.liveEntry());
}
else if (protocolVersionIsBefore(
mMinProtocolVersionSeen,
LiveBucket::
FIRST_PROTOCOL_SUPPORTING_INITENTRY_AND_METAENTRY))
{
// Prior to protocol 11, INITENTRY didn't exist, so we need
// to check ltx to see if this is an update or a create
auto key = InternalLedgerEntry(e.liveEntry()).toKey();
if (ltx->getNewestVersion(key))
{
// Prior to protocol 11, INITENTRY didn't exist, so we need
// to check ltx to see if this is an update or a create
auto key = InternalLedgerEntry(e.liveEntry()).toKey();
if (ltx->getNewestVersion(key))
{
ltx->updateWithoutLoading(e.liveEntry());
}
else
{
ltx->createWithoutLoading(e.liveEntry());
}
ltx->updateWithoutLoading(e.liveEntry());
}
else
{
if (e.type() == LIVEENTRY)
{
ltx->updateWithoutLoading(e.liveEntry());
}
else
{
ltx->createWithoutLoading(e.liveEntry());
}
ltx->createWithoutLoading(e.liveEntry());
}
}
else
{
releaseAssertOrThrow(!isUsingBucketListDB);
if (protocolVersionIsBefore(
mMinProtocolVersionSeen,
LiveBucket::
FIRST_PROTOCOL_SUPPORTING_INITENTRY_AND_METAENTRY))
if (e.type() == LIVEENTRY)
{
// Prior to protocol 11, DEAD entries could exist
// without LIVE entries in between
if (ltx->getNewestVersion(e.deadEntry()))
{
ltx->eraseWithoutLoading(e.deadEntry());
}
ltx->updateWithoutLoading(e.liveEntry());
}
else
{
ltx->eraseWithoutLoading(e.deadEntry());
ltx->createWithoutLoading(e.liveEntry());
}
}

2 changes: 0 additions & 2 deletions src/bucket/BucketApplicator.h
@@ -26,7 +26,6 @@ class BucketApplicator
uint32_t mLevel;
LiveBucketInputIterator mBucketIter;
size_t mCount{0};
std::function<bool(LedgerEntryType)> mEntryTypeFilter;
std::unordered_set<LedgerKey>& mSeenKeys;
std::streamoff mUpperBoundOffset{0};
bool mOffersRemaining{true};
@@ -73,7 +72,6 @@ class BucketApplicator
BucketApplicator(Application& app, uint32_t maxProtocolVersion,
uint32_t minProtocolVersionSeen, uint32_t level,
std::shared_ptr<LiveBucket const> bucket,
std::function<bool(LedgerEntryType)> filter,
std::unordered_set<LedgerKey>& seenKeys);
operator bool() const;
size_t advance(Counters& counters);
3 changes: 1 addition & 2 deletions src/bucket/BucketBase.cpp
@@ -397,8 +397,7 @@ BucketBase::merge(BucketManager& bucketManager, uint32_t maxProtocolVersion,

MergeKey mk{keepTombstoneEntries, oldBucket->getHash(),
newBucket->getHash(), shadowHashes};
return out.getBucket(bucketManager,
bucketManager.getConfig().isUsingBucketListDB(), &mk);
return out.getBucket(bucketManager, &mk);
}

template std::shared_ptr<LiveBucket> BucketBase::merge<LiveBucket>(