Part one of this compaction series covered HBase's compaction scheduling flow; this article walks through the compaction process itself. We pick up from the chore introduced last time: the chore() method of the CompactionChecker in HRegionServer decides whether a compaction is needed, as follows:

protected void chore() {
  // Iterate over all online regions of this server instance and check each one.
  // onlineRegions is the set of regions on this HRegionServer that can currently serve requests.
  for (HRegion r : this.instance.onlineRegions.values()) {
    if (r == null)
      continue;
    // Check every store of the region
    for (Store s : r.getStores().values()) {
      try {
        // Compactions are normally triggered by events such as a memstore flush, but some
        // compaction policies also need a periodic check. The effective check interval is
        // hbase.server.compactchecker.interval.multiplier * hbase.server.thread.wakefrequency;
        // the multiplier defaults to 1000.
        long multiplier = s.getCompactionCheckMultiplier();
        assert multiplier > 0;
        // Only run the check when the iteration counter is an exact multiple of the
        // multiplier; otherwise skip this store for now
        if (iteration % multiplier != 0) continue;
        if (s.needsCompaction()) {
          // Compaction needed: queue a system compaction request.
          // Will recognize if major is needed.
          this.instance.compactSplitThread.requestSystemCompaction(r, s, getName()
            + " requests compaction");
        } else if (s.isMajorCompaction()) {
          // A major compaction is due: go through requestCompaction() instead
          if (majorCompactPriority == DEFAULT_PRIORITY
              || majorCompactPriority > r.getCompactPriority()) {
            this.instance.compactSplitThread.requestCompaction(r, s, getName()
              + " requests major compaction; use default priority", null);
          } else {
            this.instance.compactSplitThread.requestCompaction(r, s, getName()
              + " requests major compaction; use configured priority",
              this.majorCompactPriority, null);
          }
        }
      } catch (IOException e) {
        LOG.warn("Failed major compaction check on " + r, e);
      }
    }
  }
  iteration = (iteration == Long.MAX_VALUE) ? 0 : (iteration + 1);
}
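
As a quick aside, the effective period of this check is just the product of the two settings named in the comment above. A minimal, self-contained sketch of the arithmetic, assuming the stock defaults (10 s wake frequency, multiplier 1000):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionCheckInterval {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Defaults below are the stock values; override them in hbase-site.xml if needed.
    long wakeFrequencyMs = conf.getLong("hbase.server.thread.wakefrequency", 10 * 1000L);
    long multiplier = conf.getLong("hbase.server.compactchecker.interval.multiplier", 1000L);
    // 10,000 ms * 1,000 = 10,000,000 ms, i.e. one needsCompaction() check roughly every 2.8 hours
    System.out.println("compaction check period = " + (wakeFrequencyMs * multiplier) + " ms");
  }
}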

  When s.needsCompaction() returns true, compactSplitThread.requestSystemCompaction() is called to queue the compaction; otherwise isMajorCompaction() decides whether a major compaction is due, in which case CompactSplitThread.requestCompaction() is called instead. Both requestSystemCompaction() and requestCompaction() eventually funnel into requestCompactionInternal(); they differ only in their arguments. Let's continue from requestSystemCompaction(), whose logic is:

public void requestSystemCompaction(
    final HRegion r, final Store s, final String why) throws IOException {
  requestCompactionInternal(r, s, why, Store.NO_PRIORITY, null, false);
}

  Stepping into requestCompactionInternal():

private synchronized CompactionRequest requestCompactionInternal(final HRegion r, final Store s,
    final String why, int priority, CompactionRequest request, boolean selectNow)
    throws IOException {
  // First, some necessary sanity checks: has the HRegionServer stopped, and does the
  // region's table allow compactions at all?
  if (this.server.isStopped()
      || (r.getTableDesc() != null && !r.getTableDesc().isCompactionEnabled())) {
    return null;
  }
  CompactionContext compaction = null;
  // For automatically triggered system compactions selectNow is false; for user-triggered
  // compactions (e.g. via the hbase shell) it is true.
  if (selectNow) {
    // User-triggered (e.g. a major compaction from the hbase shell): select the files
    // to compact right here.
    compaction = selectCompaction(r, s, priority, request);
    if (compaction == null) return null; // message logged inside
  }
  // We assume that most compactions are small. So, put system compactions into small
  // pool; we will do selection there, and move to large pool if necessary.
  // In other words: system compactions (memstore flush, the compaction checker chore, ...)
  // always start in shortCompactions, the small pool. User compactions additionally check
  // whether the total size of the selected HFiles exceeds the throttle threshold; if so they
  // go to longCompactions, the large pool, otherwise to the small pool as well.
  // size is the total size of all HFiles selected for this compaction.
  long size = selectNow ? compaction.getRequest().getSize() : 0;
  ThreadPoolExecutor pool = (selectNow && s.throttleCompaction(size))
      ? longCompactions : shortCompactions;
  pool.execute(new CompactionRunner(s, r, compaction, pool));
  if (LOG.isDebugEnabled()) {
    String type = (pool == shortCompactions) ? "Small " : "Large ";
    LOG.debug(type + "Compaction requested: " + (selectNow ? compaction.toString() : "system")
        + (why != null && !why.isEmpty() ? "; Because: " + why : "") + "; " + this);
  }
  return selectNow ? compaction.getRequest() : null;
}

  The logic of requestCompactionInternal() can be summarized as follows:

  1. First perform sanity checks, e.g. whether the regionserver has stopped; if so, return immediately.
  2. Check the selectNow flag, which records whether this compaction was triggered by a user (true) or by the system (false, as in this call path).
    1. For a user-triggered compaction, selectCompaction() picks the files that actually need compacting.
  3. Based on selectNow and the request size, decide which thread pool to use: longCompactions or shortCompactions (see the throttle sketch after this list).
  4. Construct a CompactionRunner and hand it to that pool.
    1. Here the CompactionRunner's compaction field is null: selectNow is false for a system-triggered compaction, so selectCompaction() was never called to populate it.
    2. Therefore CompactionRunner.run() contains logic to re-select the files to compact at execution time.
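
  A hedged look at the throttle decision in step 3: in the ratio-based policy of HBase 0.98/1.x, throttleCompaction() simply compares the request size against a configurable throttle point. The snippet below is reconstructed from that source (comConf is the policy's CompactionConfiguration); verify the names against your version:

// RatioBasedCompactionPolicy, approximately:
public boolean throttleCompaction(long compactionSize) {
  return compactionSize > comConf.getThrottlePoint();
}

// The throttle point is read from hbase.regionserver.thread.compaction.throttle and
// defaults to 2 * hbase.hstore.compaction.max * hbase.hregion.memstore.flush.size,
// so only unusually large requests land in the large pool.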

  Here is CompactionRunner in detail:

private class CompactionRunner implements Runnable, Comparable<CompactionRunner> {
  private final Store store;
  private final HRegion region;
  private CompactionContext compaction;
  private int queuedPriority;
  private ThreadPoolExecutor parent;

  public CompactionRunner(Store store, HRegion region,
      CompactionContext compaction, ThreadPoolExecutor parent) {
    super();
    this.store = store;
    this.region = region;
    this.compaction = compaction;
    // Queued priority: if the compaction context is null, ask the HStore via
    // getCompactPriority(); otherwise take it from the compaction request, which in turn
    // received it as the priority argument of requestCompactionInternal().
    this.queuedPriority = (this.compaction == null)
        ? store.getCompactPriority() : compaction.getRequest().getPriority();
    this.parent = parent;
  }

  @Override
  public String toString() {
    return (this.compaction != null) ? ("Request = " + compaction.getRequest())
        : ("Store = " + store.toString() + ", pri = " + queuedPriority);
  }

  @Override
  public void run() {
    Preconditions.checkNotNull(server);
    // Same sanity checks as before: has the HRegionServer stopped, and does the
    // region's table still allow compactions?
    if (server.isStopped()
        || (region.getTableDesc() != null && !region.getTableDesc().isCompactionEnabled())) {
      return;
    }
    // Common case - system compaction without a file selection. Select now.
    if (this.compaction == null) {
      int oldPriority = this.queuedPriority;
      this.queuedPriority = this.store.getCompactPriority();
      // If the current priority is now numerically larger than the one we queued with
      if (this.queuedPriority > oldPriority) {
        // Store priority decreased while we were in queue (due to some other compaction?),
        // requeue with new priority to avoid blocking potential higher priorities.
        this.parent.execute(this);
        return;
      }
      try {
        // Select the candidate HFiles
        this.compaction = selectCompaction(this.region, this.store, queuedPriority, null);
      } catch (IOException ex) {
        LOG.error("Compaction selection failed " + this, ex);
        server.checkFileSystem();
        return;
      }
      if (this.compaction == null) return; // nothing to do
      // Now see if we are in correct pool for the size; if not, go to the correct one.
      // We might end up waiting for a while, so cancel the selection.
      assert this.compaction.hasSelection();
      ThreadPoolExecutor pool = store.throttleCompaction(
          compaction.getRequest().getSize()) ? longCompactions : shortCompactions;
      if (this.parent != pool) {
        // Wrong pool: cancel the request on the HStore, reset the context, switch pools
        // and requeue; the compaction will be re-selected on the next run
        this.store.cancelRequestedCompaction(this.compaction);
        this.compaction = null;
        this.parent = pool;
        this.parent.execute(this);
        return;
      }
    }
    // Finally we can compact something.
    assert this.compaction != null;
    // Pre-execution hook
    this.compaction.getRequest().beforeExecute();
    try {
      // Note: please don't put single-compaction logic here;
      // put it into region/store/etc. This is CST logic.
      long start = EnvironmentEdgeManager.currentTime();
      // Delegate to HRegion.compact() to compact this store
      boolean completed =
          region.compact(compaction, store, compactionThroughputController);
      long now = EnvironmentEdgeManager.currentTime();
      LOG.info(((completed) ? "Completed" : "Aborted") + " compaction: " +
          this + "; duration=" + StringUtils.formatTimeDiff(now, start));
      if (completed) {
        // degenerate case: blocked regions require recursive enqueues
        if (store.getCompactPriority() <= 0) {
          // Priority <= 0 means the store still has too many files:
          // queue another system compaction right away
          requestSystemCompaction(region, store, "Recursive enqueue");
        } else {
          // see if the compaction has caused us to exceed max region size:
          // request a split, which only proceeds if the region is large enough
          requestSplit(region);
        }
      }
    } catch (IOException ex) {
      IOException remoteEx = RemoteExceptionHandler.checkIOException(ex);
      LOG.error("Compaction failed " + this, remoteEx);
      if (remoteEx != ex) {
        LOG.info("Compaction failed at original callstack: " + formatStackTrace(ex));
      }
      server.checkFileSystem();
    } catch (Exception ex) {
      LOG.error("Compaction failed " + this, ex);
      server.checkFileSystem();
    } finally {
      LOG.debug("CompactSplitThread Status: " + CompactSplitThread.this);
    }
    this.compaction.getRequest().afterExecute();
  }

  As shown above, CompactionRunner does the following:

  1. If the compaction field is null:
    1. Check whether the store's priority changed while queued; if it did, requeue this CompactionRunner.
    2. Call selectCompaction() to pick the candidate HFiles.
    3. Use store.throttleCompaction() to decide which pool the request belongs in; if the pool changed, cancel the compaction and requeue, so it gets re-initialized later.
  2. Call compaction.getRequest().beforeExecute() for pre-compaction work: by default this is a no-op, but with a coprocessor attached the corresponding hooks run.
  3. Record the start time.
  4. Call the region's compact() method to compact the store.
  5. On success, use the post-compaction priority to decide whether to immediately queue another compaction or to request a split.
  6. Call afterExecute() on the request.

  Now let's examine what each stage does, starting with selectCompaction(). It picks the files to compact and wraps them in a CompactionContext, which it returns:

private CompactionContext selectCompaction(final HRegion r, final Store s,
    int priority, CompactionRequest request) throws IOException {
  // Delegate to HStore.requestCompaction() to obtain the CompactionContext
  CompactionContext compaction = s.requestCompaction(priority, request);
  if (compaction == null) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Not compacting " + r.getRegionNameAsString() +
          " because compaction request was cancelled");
    }
    return null;
  }
  // Make sure the CompactionContext carries a non-empty compaction request
  assert compaction.hasSelection();
  if (priority != Store.NO_PRIORITY) {
    compaction.getRequest().setPriority(priority);
  }
  return compaction;
}

  So the CompactionContext ultimately comes from the store's requestCompaction() method. Let's step inside and see what happens there.

public CompactionContext requestCompaction(int priority, CompactionRequest baseRequest)
    throws IOException {
  // don't even select for compaction if writes are disabled
  // If the corresponding HRegion is not writable, return null right away
  if (!this.areWritesEnabled()) {
    return null;
  }

  // Before we do compaction, try to get rid of unneeded files to simplify things.
  removeUnneededFiles();

  // Create the CompactionContext through the store engine
  CompactionContext compaction = storeEngine.createCompaction();
  CompactionRequest request = null;
  // Take the read lock
  this.lock.readLock().lock();
  try {
    synchronized (filesCompacting) {
      // First, see if coprocessor would want to override selection.
      if (this.getCoprocessorHost() != null) {
        // Let the CompactionContext pre-select a list of StoreFiles
        List<StoreFile> candidatesForCoproc = compaction.preSelect(this.filesCompacting);
        boolean override = this.getCoprocessorHost().preCompactSelection(
            this, candidatesForCoproc, baseRequest);
        if (override) {
          // Coprocessor is overriding normal file selection.
          compaction.forceSelect(new CompactionRequest(candidatesForCoproc));
        }
      }

      // Normal case - coprocessor is not overriding file selection.
      if (!compaction.hasSelection()) {
        // Is this a user-requested compaction?
        boolean isUserCompaction = priority == Store.PRIORITY_USER;
        boolean mayUseOffPeak = offPeakHours.isOffPeakHour() &&
            offPeakCompactionTracker.compareAndSet(false, true);
        try {
          // Run the actual selection via CompactionContext.select()
          compaction.select(this.filesCompacting, isUserCompaction,
              mayUseOffPeak, forceMajor && filesCompacting.isEmpty());
        } catch (IOException e) {
          if (mayUseOffPeak) {
            offPeakCompactionTracker.set(false);
          }
          throw e;
        }
        assert compaction.hasSelection();
        if (mayUseOffPeak && !compaction.getRequest().isOffPeak()) {
          // Compaction policy doesn't want to take advantage of off-peak.
          offPeakCompactionTracker.set(false);
        }
      }
      if (this.getCoprocessorHost() != null) {
        this.getCoprocessorHost().postCompactSelection(
            this, ImmutableList.copyOf(compaction.getRequest().getFiles()), baseRequest);
      }

      // Selected files; see if we have a compaction with some custom base request.
      // If a request was passed in, merge it with the selected one
      if (baseRequest != null) {
        // Update the request with what the system thinks the request should be;
        // its up to the request if it wants to listen.
        compaction.forceSelect(
            baseRequest.combineWith(compaction.getRequest()));
      }

      // Finally, we have the resulting files list. Check if we have any files at all.
      request = compaction.getRequest();
      // Pull the set of files selected for compaction out of the request
      final Collection<StoreFile> selectedFiles = request.getFiles();
      if (selectedFiles.isEmpty()) {
        return null;
      }

      // Record the selected files in filesCompacting (this answers a question
      // left open in the previous article)
      addToCompactingFiles(selectedFiles);

      // If we're enqueuing a major, clear the force flag.
      this.forceMajor = this.forceMajor && !request.isMajor();

      // Set common request properties.
      // Set priority, either override value supplied by caller or from store.
      request.setPriority((priority != Store.NO_PRIORITY) ? priority : getCompactPriority());
      request.setDescription(getRegionInfo().getRegionNameAsString(), getColumnFamilyName());
    }
  } finally {
    this.lock.readLock().unlock();
  }

  LOG.debug(getRegionInfo().getEncodedName() + " - " + getColumnFamilyName()
      + ": Initiating " + (request.isMajor() ? "major" : "minor") + " compaction"
      + (request.isAllFiles() ? " (all files)" : ""));
  // Report the start of a compaction request to the HRegion
  this.region.reportCompactionRequestStart(request.isMajor());
  // Return the compaction context
  return compaction;
}

  To summarize the flow above:

  1. First try to get rid of unneeded files to simplify things: removeUnneededFiles().
  2. Create a CompactionContext via storeEngine.createCompaction().
  3. Select the files through CompactionContext.select().
  4. Record the selected files in the context and return it.

  Let's look at removeUnneededFiles() first. Based on each file's maximum timestamp it weeds out files that are no longer needed, adding the already expired files to filesCompacting:

private void removeUnneededFiles() throws IOException {
  if (!conf.getBoolean("hbase.store.delete.expired.storefile", true)) return;
  if (getFamily().getMinVersions() > 0) {
    LOG.debug("Skipping expired store file removal due to min version being " +
        getFamily().getMinVersions());
    return;
  }
  this.lock.readLock().lock();
  Collection<StoreFile> delSfs = null;
  try {
    synchronized (filesCompacting) {
      // Get the configured TTL; if none is set this is Long.MAX_VALUE
      long cfTtl = getStoreFileTtl();
      if (cfTtl != Long.MAX_VALUE) { // i.e. the TTL is not FOREVER
        // Ultimately calls getUnneededFiles()
        delSfs = storeEngine.getStoreFileManager().getUnneededFiles(
            EnvironmentEdgeManager.currentTime() - cfTtl, filesCompacting);
        // Record the expired files in filesCompacting
        addToCompactingFiles(delSfs);
      }
    }
  } finally {
    this.lock.readLock().unlock();
  }
  if (delSfs == null || delSfs.isEmpty()) return;

  Collection<StoreFile> newFiles = new ArrayList<StoreFile>(); // No new files.
  writeCompactionWalRecord(delSfs, newFiles);
  replaceStoreFiles(delSfs, newFiles);
  completeCompaction(delSfs);
  LOG.info("Completed removal of " + delSfs.size() + " unnecessary (expired) file(s) in "
      + this + " of " + this.getRegionInfo().getRegionNameAsString()
      + "; total size for store is " + TraditionalBinaryPrefix.long2String(storeSize, "", 1));
}
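
  Both removeUnneededFiles() here and requestCompaction() above record their selections through addToCompactingFiles(). A hedged reconstruction from the HBase 1.x HStore (names may differ slightly in your version):

private void addToCompactingFiles(final Collection<StoreFile> filesToAdd) {
  if (filesToAdd == null) return;
  // Check that we do not try to compact the same StoreFile twice.
  if (!Collections.disjoint(filesCompacting, filesToAdd)) {
    Preconditions.checkArgument(false, "%s overlaps with %s", filesToAdd, filesCompacting);
  }
  filesCompacting.addAll(filesToAdd);
  // Keep filesCompacting sorted by sequence id
  Collections.sort(filesCompacting, StoreFile.Comparators.SEQ_ID);
}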

  getUnneededFiles() works as follows:

public Collection<StoreFile> getUnneededFiles(long maxTs, List<StoreFile> filesCompacting) {
  Collection<StoreFile> expiredStoreFiles = null;
  ImmutableList<StoreFile> files = storefiles;
  // 1) We can never get rid of the last file which has the maximum seqid.
  // 2) Files that are not the latest can't become one due to (1), so the rest are fair game.
  for (int i = 0; i < files.size() - 1; ++i) {
    StoreFile sf = files.get(i);
    long fileTs = sf.getReader().getMaxTimestamp();
    // Expired if the file's maximum timestamp is below maxTs (now - TTL)
    // and the file is not already being compacted
    if (fileTs < maxTs && !filesCompacting.contains(sf)) {
      LOG.info("Found an expired store file: " + sf.getPath()
          + " whose maxTimeStamp is " + fileTs + ", which is below " + maxTs);
      if (expiredStoreFiles == null) {
        expiredStoreFiles = new ArrayList<StoreFile>();
      }
      expiredStoreFiles.add(sf);
    }
  }
  // Return the list of files to be removed
  return expiredStoreFiles;
}

Going further down, file selection happens in the select() method of the CompactionContext:

public boolean select(List<StoreFile> filesCompacting, boolean isUserCompaction,
    boolean mayUseOffPeak, boolean forceMajor) throws IOException {
  // Ask the compaction policy's selectCompaction() to build the compaction request
  request = compactionPolicy.selectCompaction(storeFileManager.getStorefiles(),
      filesCompacting, isUserCompaction, mayUseOffPeak, forceMajor);
  // true if a request was produced, false otherwise
  return request != null;
}

So select() delegates file selection to the configured compaction policy's selectCompaction(). Our production cluster does not override the policy, so the default ratio-based one is used:

public CompactionRequest selectCompaction(Collection<StoreFile> candidateFiles,
    final List<StoreFile> filesCompacting, final boolean isUserCompaction,
    final boolean mayUseOffPeak, final boolean forceMajor) throws IOException {
  // Preliminary compaction subject to filters:
  // start from a candidate list built from the incoming candidateFiles
  ArrayList<StoreFile> candidateSelection = new ArrayList<StoreFile>(candidateFiles);
  // Stuck and not compacting enough (estimate). It is not guaranteed that we will be
  // able to compact more if stuck and compacting, because ratio policy excludes some
  // non-compacting files from consideration during compaction (see getCurrentEligibleFiles).
  // futureFiles is 0 if filesCompacting is empty, 1 otherwise
  int futureFiles = filesCompacting.isEmpty() ? 0 : 1;
  // Are we at or beyond the blocking store file count?
  boolean mayBeStuck = (candidateFiles.size() - filesCompacting.size() + futureFiles)
      >= storeConfigInfo.getBlockingFileCount();
  // Exclude files that are already being compacted, i.e. those in filesCompacting
  candidateSelection = getCurrentEligibleFiles(candidateSelection, filesCompacting);
  LOG.debug("Selecting compaction from " + candidateFiles.size() + " store files, " +
      filesCompacting.size() + " compacting, " + candidateSelection.size() +
      " eligible, " + storeConfigInfo.getBlockingFileCount() + " blocking");

  // If we can't have all files, we cannot do major anyway.
  // isAllFiles is true when the candidate list still covers every file of the store,
  // i.e. candidateSelection has the same size as the initial candidateFiles,
  // which represents all files under the store
  boolean isAllFiles = candidateFiles.size() == candidateSelection.size();
  if (!(forceMajor && isAllFiles)) {
    // Not a forced major over all files: skip files that are too large via
    // skipLargeFiles(), then re-evaluate isAllFiles
    candidateSelection = skipLargeFiles(candidateSelection);
    isAllFiles = candidateFiles.size() == candidateSelection.size();
  }

  // Try a major compaction if this is a user-requested major compaction,
  // or if we do not have too many files to compact and this was requested as a major compaction.
  // isTryingMajor is true when either:
  // 1) a major compaction is forced, covers all files, and was requested by a user; or
  // 2) a major compaction is forced over all files, or the policy itself decides one is due,
  //    and candidateSelection stays below the configured maximum file count
  boolean isTryingMajor = (forceMajor && isAllFiles && isUserCompaction)
      || (((forceMajor && isAllFiles) || isMajorCompaction(candidateSelection))
          && (candidateSelection.size() < comConf.getMaxFilesToCompact()));
  // Or, if there are any references among the candidates:
  // references mean the files come from a recent split
  boolean isAfterSplit = StoreUtils.hasReferences(candidateSelection);
  if (!isTryingMajor && !isAfterSplit) {
    // We're not compacting all files, let's see what files are applicable:
    // - filterBulk() removes files that must not take part in minor compactions;
    // - applyCompactionPolicy() runs the actual selection algorithm;
    // - checkMinFilesCriteria() verifies the minimum file count for a compaction.
    candidateSelection = filterBulk(candidateSelection);
    candidateSelection = applyCompactionPolicy(candidateSelection, mayUseOffPeak, mayBeStuck);
    candidateSelection = checkMinFilesCriteria(candidateSelection);
  }
  // Drop any files beyond the configured maximum
  candidateSelection = removeExcessFiles(candidateSelection, isUserCompaction, isTryingMajor);
  // Now we have the final file list, so we can determine if we can do major/all files.
  isAllFiles = (candidateFiles.size() == candidateSelection.size());
  // Build the CompactionRequest from candidateSelection
  CompactionRequest result = new CompactionRequest(candidateSelection);
  result.setOffPeak(!candidateSelection.isEmpty() && !isAllFiles && mayUseOffPeak);
  result.setIsMajor(isTryingMajor && isAllFiles, isAllFiles);
  return result;
}
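
Which policy actually runs here is configurable. A hedged sketch, assuming the DefaultStoreEngine of HBase 0.98/1.x: the policy class is read from hbase.hstore.defaultengine.compactionpolicy.class, and in those versions the shipped default is ExploringCompactionPolicy, a subclass of RatioBasedCompactionPolicy with smarter window selection:

// Hedged sketch; the property name is per DefaultStoreEngine in 0.98/1.x.
Configuration conf = HBaseConfiguration.create();
// Pin the plain ratio-based policy explicitly (instead of the Exploring subclass):
conf.set("hbase.hstore.defaultengine.compactionpolicy.class",
    "org.apache.hadoop.hbase.regionserver.compactions.RatioBasedCompactionPolicy");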

  The heart of the selection lies in filterBulk(), applyCompactionPolicy() and checkMinFilesCriteria(); let's look at each in turn.

private ArrayList<StoreFile> filterBulk(ArrayList<StoreFile> candidates) {
  candidates.removeAll(Collections2.filter(candidates,
      new Predicate<StoreFile>() {
        @Override
        public boolean apply(StoreFile input) {
          return input.excludeFromMinorCompaction();
        }
      }));
  return candidates;
}

  filterBulk() consults a flag in each HFile's file-info metadata to decide whether the file must be excluded from minor compactions (see the sketch below).
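
  For the curious, a hedged sketch of where that flag comes from: when a StoreFile's reader is opened, HBase 0.98/1.x reads it from the file-info map under the key EXCLUDE_FROM_MINOR_COMPACTION; the key name is per that source, so treat it as an assumption for other versions:

// Approximate excerpt from StoreFile.open() in 0.98/1.x (hedged):
byte[] b = metadataMap.get(EXCLUDE_FROM_MINOR_COMPACTION_KEY);
this.excludeFromMinorCompaction = (b != null && Bytes.toBoolean(b));
// Bulk-load tools such as HFileOutputFormat set this key when
// hbase.mapreduce.hfileoutputformat.compaction.exclude is true.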

  The important one is applyCompactionPolicy(), whose logic is:

ArrayList<StoreFile> applyCompactionPolicy(ArrayList<StoreFile> candidates,
    boolean mayUseOffPeak, boolean mayBeStuck) throws IOException {
  if (candidates.isEmpty()) {
    return candidates;
  }

  // we're doing a minor compaction, let's see what files are applicable
  int start = 0;
  // Compaction ratio: hbase.hstore.compaction.ratio, default 1.2
  double ratio = comConf.getCompactionRatio();
  if (mayUseOffPeak) {
    // Off-peak ratio: hbase.hstore.compaction.ratio.offpeak, default 5.0
    ratio = comConf.getCompactionRatioOffPeak();
    LOG.info("Running an off-peak compaction, selection ratio = " + ratio);
  }

  // get store file sizes for incremental compacting selection.
  final int countOfFiles = candidates.size();
  long[] fileSizes = new long[countOfFiles];
  long[] sumSize = new long[countOfFiles];
  for (int i = countOfFiles - 1; i >= 0; --i) {
    StoreFile file = candidates.get(i);
    fileSizes[i] = file.getReader().length();
    // calculate the sum of fileSizes[i,i+maxFilesToCompact-1) for algo:
    // tooFar points just past the largest allowed window, so sumSize[i] covers
    // at most maxFilesToCompact files starting at i
    int tooFar = i + comConf.getMaxFilesToCompact() - 1;
    sumSize[i] = fileSizes[i]
        + ((i + 1 < countOfFiles) ? sumSize[i + 1] : 0)
        - ((tooFar < countOfFiles) ? fileSizes[tooFar] : 0);
  }

  // Walk forward while enough files remain to meet the minimum file count and the file
  // at 'start' is larger than both the minimum compact size and ratio times the total
  // size of the (newer) files in the window behind it. In effect this finds the first
  // window of comparably sized files that is worth compacting.
  while (countOfFiles - start >= comConf.getMinFilesToCompact() &&
      fileSizes[start] > Math.max(comConf.getMinCompactSize(),
          (long) (sumSize[start + 1] * ratio))) {
    ++start;
  }
  if (start < countOfFiles) {
    LOG.info("Default compaction algorithm has selected " + (countOfFiles - start)
        + " files from " + countOfFiles + " candidates");
  } else if (mayBeStuck) {
    // We may be stuck. Compact the latest files if we can.
    // Still honor the minimum file count.
    int filesToLeave = candidates.size() - comConf.getMinFilesToCompact();
    if (filesToLeave >= 0) {
      start = filesToLeave;
    }
  }
  candidates.subList(0, start).clear();
  return candidates;
}

The above is the standard RatioBasedCompactionPolicy algorithm, which has been covered at length elsewhere, so we won't dissect it further here; a short worked example follows instead.
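
To make the ratio condition concrete, here is a small, self-contained walk-through with assumed numbers (ratio = 1.2, minimum of 3 files, 10 MB minimum compact size; all file sizes are hypothetical):

// Self-contained demo of the sliding check above; all values are made up for illustration.
public class RatioSelectionDemo {
  public static void main(String[] args) {
    long[] fileSizes = {100, 25, 12, 10, 10}; // MB, oldest file first
    double ratio = 1.2;                       // hbase.hstore.compaction.ratio
    long minCompactSize = 10;                 // MB
    int minFilesToCompact = 3;
    int count = fileSizes.length;
    // sumSize[i] = total size of files i..end (the maxFilesToCompact window is
    // omitted here for brevity)
    long[] sumSize = new long[count + 1];
    for (int i = count - 1; i >= 0; --i) {
      sumSize[i] = sumSize[i + 1] + fileSizes[i];
    }
    int start = 0;
    while (count - start >= minFilesToCompact
        && fileSizes[start] > Math.max(minCompactSize, (long) (sumSize[start + 1] * ratio))) {
      ++start; // the file at 'start' dwarfs the newer files behind it; skip it
    }
    // With these numbers: 100 > max(10, 57 * 1.2) skips the 100 MB file, then
    // 25 > max(10, 32 * 1.2) is false, so files {25, 12, 10, 10} are selected.
    System.out.println("Selected " + (count - start) + " files starting at index " + start);
  }
}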

  Next is checkMinFilesCriteria(), which verifies that the files chosen by applyCompactionPolicy() meet the minimum file count for a compaction; if they don't, candidates is simply cleared:

private ArrayList<StoreFile> checkMinFilesCriteria(ArrayList<StoreFile> candidates) {
  int minFiles = comConf.getMinFilesToCompact();
  if (candidates.size() < minFiles) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Not compacting files because we only have " + candidates.size() +
          " files ready for compaction. Need " + minFiles + " to initiate.");
    }
    candidates.clear();
  }
  return candidates;
}

  After the candidate files are chosen, removeExcessFiles() checks whether the selection exceeds the configured maximum (hbase.hstore.compaction.max); if it does, files are dropped from the selection until it fits the limit. A hedged reconstruction follows.
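
  A hedged reconstruction of removeExcessFiles() from the 0.98/1.x ratio-based policy (verify names against your version); the one subtlety is that a user-requested major compaction is allowed to exceed the limit:

private ArrayList<StoreFile> removeExcessFiles(ArrayList<StoreFile> candidates,
    boolean isUserCompaction, boolean isMajorCompaction) {
  int excess = candidates.size() - comConf.getMaxFilesToCompact();
  if (excess > 0) {
    if (isMajorCompaction && isUserCompaction) {
      // User-requested majors may compact more than the configured maximum
      LOG.debug("Warning, compacting more than " + comConf.getMaxFilesToCompact()
          + " files because of a user-requested major compaction");
    } else {
      LOG.debug("Too many admissible files. Excluding " + excess
          + " files from compaction candidates");
      candidates.subList(comConf.getMaxFilesToCompact(), candidates.size()).clear();
    }
  }
  return candidates;
}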

  Finally, a CompactionRequest is built from the selected candidate files.

  Everything so far has been the selectCompaction() part of CompactionRunner.run(). Now comes the part that actually performs the compaction, driven by region.compact():

public boolean compact(CompactionContext compaction, Store store,
    CompactionThroughputController throughputController) throws IOException {
  assert compaction != null && compaction.hasSelection();
  assert !compaction.getRequest().getFiles().isEmpty();
  // If the region is closing or already closed, cancel the compaction
  if (this.closing.get() || this.closed.get()) {
    LOG.debug("Skipping compaction on " + this + " because closing/closed");
    store.cancelRequestedCompaction(compaction);
    return false;
  }
  MonitoredTask status = null;
  boolean requestNeedsCancellation = true;
  // block waiting for the lock for compaction
  lock.readLock().lock();
  try {
    byte[] cf = Bytes.toBytes(store.getColumnFamilyName());
    // A series of sanity checks
    if (stores.get(cf) != store) {
      LOG.warn("Store " + store.getColumnFamilyName() + " on region " + this
          + " has been re-instantiated, cancel this compaction request. "
          + " It may be caused by the roll back of split transaction");
      return false;
    }
    status = TaskMonitor.get().createStatus("Compacting " + store + " in " + this);
    if (this.closed.get()) {
      String msg = "Skipping compaction on " + this + " because closed";
      LOG.debug(msg);
      status.abort(msg);
      return false;
    }
    boolean wasStateSet = false;
    try {
      synchronized (writestate) {
        if (writestate.writesEnabled) {
          // writesEnabled defaults to true (readOnly is false);
          // bump the region's compacting counter
          wasStateSet = true;
          ++writestate.compacting;
        } else {
          String msg = "NOT compacting region " + this + ". Writes disabled.";
          LOG.info(msg);
          status.abort(msg);
          return false;
        }
      }
      LOG.info("Starting compaction on " + store + " in region " + this
          + (compaction.getRequest().isOffPeak() ? " as an off-peak compaction" : ""));
      doRegionCompactionPrep();
      try {
        status.setStatus("Compacting store " + store);
        // We no longer need to cancel the request on the way out of this
        // method because Store#compact will clean up unconditionally
        requestNeedsCancellation = false;
        // Finally delegate the actual compaction to the store's compact() method
        store.compact(compaction, throughputController);
      } catch (InterruptedIOException iioe) {
        String msg = "compaction interrupted";
        LOG.info(msg, iioe);
        status.abort(msg);
        return false;
      }
    } finally {
      if (wasStateSet) {
        synchronized (writestate) {
          --writestate.compacting;
          if (writestate.compacting <= 0) {
            writestate.notifyAll();
          }
        }
      }
    }
    status.markComplete("Compaction complete");
    return true;
  } finally {
    try {
      if (requestNeedsCancellation) store.cancelRequestedCompaction(compaction);
      if (status != null) status.cleanup();
    } finally {
      lock.readLock().unlock();
    }
  }
}

  Next up is Store.compact(), where the real time is spent: it invokes the CompactionContext's compact() method, which in turn drives a compactor to rewrite the files. Its internals are left for the next article; a hedged preview follows.
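
As a preview, a hedged look at that next hop (DefaultStoreEngine.DefaultCompactionContext in HBase 0.98/1.x; treat the signature as approximate for your version): the context merely hands the selected request to the DefaultCompactor, which rewrites the selected HFiles into new ones:

// Inside DefaultStoreEngine.DefaultCompactionContext, approximately:
@Override
public List<Path> compact(CompactionThroughputController throughputController)
    throws IOException {
  return compactor.compact(request, throughputController);
}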
