
Introduction to the GATK ReadsPathDataSource Class

GATK (Genome Analysis Toolkit) is a widely used toolkit for genome analysis. One of the core libraries it builds on is htsjdk, which handles high-throughput sequencing data formats. Within GATK, the ReadsPathDataSource class is responsible for managing and serving reads from high-throughput sequencing files (SAM, BAM, and CRAM) that are accessible via java.nio Paths.

Common Use Cases

  • Data loading: within GATK's analysis tool chain, ReadsPathDataSource is routinely used to load sequencing reads from one or more specified paths.
  • Data filtering: ReadsPathDataSource makes it easy to pre-filter while loading, for example selecting only the read records of interest (such as those overlapping a set of intervals).
  • Multi-file support: reads can be loaded from several files at once, which makes analyzing data from multiple samples more convenient (see the usage sketch below).
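
The following sketch illustrates these use cases: loading two BAM files, restricting the next traversal to an interval of interest, and iterating over the resulting GATKRead records. The file names are hypothetical, and the example assumes the GATK engine classes shown later in this post are on the classpath; it is a minimal usage sketch, not an official GATK snippet.

import htsjdk.samtools.SAMFileHeader;
import org.broadinstitute.hellbender.engine.ReadsPathDataSource;
import org.broadinstitute.hellbender.utils.SimpleInterval;
import org.broadinstitute.hellbender.utils.read.GATKRead;

import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public final class ReadsPathDataSourceExample {
    public static void main(final String[] args) {
        // Hypothetical input files -- replace with real, indexed BAM/CRAM paths.
        final List<Path> bams = Arrays.asList(
                Paths.get("sample1.bam"),
                Paths.get("sample2.bam"));

        // try-with-resources works because ReadsDataSource extends AutoCloseable.
        try (final ReadsPathDataSource readsSource = new ReadsPathDataSource(bams)) {
            // Merged header when multiple inputs are given, single-file header otherwise.
            final SAMFileHeader header = readsSource.getHeader();
            System.out.println("Contigs in header: " + header.getSequenceDictionary().size());

            // Restrict the next full traversal to reads overlapping this interval;
            // this requires an index for every input file.
            readsSource.setTraversalBounds(
                    Arrays.asList(new SimpleInterval("chr1", 1_000_000, 2_000_000)), false);

            long count = 0;
            for (final GATKRead read : readsSource) {
                count++;  // e.g. inspect read.getName(), read.getContig(), read.getStart()
            }
            System.out.println("Reads overlapping the interval: " + count);
        }
    }
}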

Class Relationships

ReadsPathDataSource implements the ReadsDataSource interface, which in turn extends GATKDataSource<GATKRead> and AutoCloseable. The source for all three types is reproduced below.

ReadsPathDataSource Source Code

package org.broadinstitute.hellbender.engine;

import com.google.common.annotations.VisibleForTesting;
import htsjdk.samtools.MergingSamRecordIterator;
import htsjdk.samtools.SAMException;
import htsjdk.samtools.SAMFileHeader;
import htsjdk.samtools.SAMRecord;
import htsjdk.samtools.SAMSequenceDictionary;
import htsjdk.samtools.SamFileHeaderMerger;
import htsjdk.samtools.SamInputResource;
import htsjdk.samtools.SamReader;
import htsjdk.samtools.SamReaderFactory;
import htsjdk.samtools.util.CloseableIterator;
import htsjdk.samtools.util.IOUtil;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.broadinstitute.hellbender.exceptions.GATKException;
import org.broadinstitute.hellbender.exceptions.UserException;
import org.broadinstitute.hellbender.utils.IntervalUtils;
import org.broadinstitute.hellbender.utils.SimpleInterval;
import org.broadinstitute.hellbender.utils.Utils;
import org.broadinstitute.hellbender.utils.gcs.BucketUtils;
import org.broadinstitute.hellbender.utils.iterators.SAMRecordToReadIterator;
import org.broadinstitute.hellbender.utils.iterators.SamReaderQueryingIterator;
import org.broadinstitute.hellbender.utils.read.GATKRead;
import org.broadinstitute.hellbender.utils.read.ReadConstants;

import java.io.IOException;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

/**
 * Manages traversals and queries over sources of reads which are accessible via {@link Path}s
 * (for now, SAM/BAM/CRAM files only).
 *
 * Two basic operations are available:
 *
 * -Iteration over all reads, optionally restricted to reads that overlap a set of intervals
 * -Targeted queries by one interval at a time
 */
public final class ReadsPathDataSource implements ReadsDataSource {
    private static final Logger logger = LogManager.getLogger(ReadsPathDataSource.class);

    /**
     * Mapping from SamReaders to iterators over the reads from each reader. Only one
     * iterator can be open from a given reader at a time (this is a restriction
     * in htsjdk). Iterator is set to null for a reader if no iteration is currently
     * active on that reader.
     */
    private final Map<SamReader, CloseableIterator<SAMRecord>> readers;

    /**
     * Hang onto the input files so that we can print useful errors about them
     */
    private final Map<SamReader, Path> backingPaths;

    /**
     * Only reads that overlap these intervals (and unmapped reads, if {@link #traverseUnmapped} is set) will be returned
     * during a full iteration. Null if iteration is unbounded.
     *
     * Individual queries are unaffected by these intervals -- only traversals initiated via {@link #iterator} are affected.
     */
    private List<SimpleInterval> intervalsForTraversal;

    /**
     * If true, restrict traversals to unmapped reads (and reads overlapping any {@link #intervalsForTraversal}, if set).
     * False if iteration is unbounded or bounded only by our {@link #intervalsForTraversal}.
     *
     * Note that this setting covers only unmapped reads that have no position -- unmapped reads that are assigned the
     * position of their mates will be returned by queries overlapping that position.
     *
     * Individual queries are unaffected by this setting -- only traversals initiated via {@link #iterator} are affected.
     */
    private boolean traverseUnmapped;

    /**
     * Used to create a merged Sam header when we're dealing with multiple readers. Null if we only have a single reader.
     */
    private final SamFileHeaderMerger headerMerger;

    /**
     * Are indices available for all files?
     */
    private boolean indicesAvailable;

    /**
     * Has it been closed already.
     */
    private boolean isClosed;

    /**
     * Initialize this data source with a single SAM/BAM file and validation stringency SILENT.
     *
     * @param samFile SAM/BAM file, not null.
     */
    public ReadsPathDataSource( final Path samFile ) {
        this(samFile != null ? Arrays.asList(samFile) : null, (SamReaderFactory)null);
    }

    /**
     * Initialize this data source with multiple SAM/BAM files and validation stringency SILENT.
     *
     * @param samFiles SAM/BAM files, not null.
     */
    public ReadsPathDataSource( final List<Path> samFiles ) {
        this(samFiles, (SamReaderFactory)null);
    }

    /**
     * Initialize this data source with a single SAM/BAM file and a custom SamReaderFactory
     *
     * @param samPath path to SAM/BAM file, not null.
     * @param customSamReaderFactory SamReaderFactory to use, if null a default factory with no reference and validation
     *                               stringency SILENT is used.
     */
    public ReadsPathDataSource( final Path samPath, SamReaderFactory customSamReaderFactory ) {
        this(samPath != null ? Arrays.asList(samPath) : null, customSamReaderFactory);
    }

    /**
     * Initialize this data source with multiple SAM/BAM files and a custom SamReaderFactory
     *
     * @param samPaths path to SAM/BAM file, not null.
     * @param customSamReaderFactory SamReaderFactory to use, if null a default factory with no reference and validation
     *                               stringency SILENT is used.
     */
    public ReadsPathDataSource( final List<Path> samPaths, SamReaderFactory customSamReaderFactory ) {
        this(samPaths, null, customSamReaderFactory, 0, 0);
    }

    /**
     * Initialize this data source with multiple SAM/BAM/CRAM files, and explicit indices for those files.
     *
     * @param samPaths paths to SAM/BAM/CRAM files, not null
     * @param samIndices indices for all of the SAM/BAM/CRAM files, in the same order as samPaths. May be null,
     *                   in which case index paths are inferred automatically.
     */
    public ReadsPathDataSource( final List<Path> samPaths, final List<Path> samIndices ) {
        this(samPaths, samIndices, null, 0, 0);
    }

    /**
     * Initialize this data source with multiple SAM/BAM/CRAM files, explicit indices for those files,
     * and a custom SamReaderFactory.
     *
     * @param samPaths paths to SAM/BAM/CRAM files, not null
     * @param samIndices indices for all of the SAM/BAM/CRAM files, in the same order as samPaths. May be null,
     *                   in which case index paths are inferred automatically.
     * @param customSamReaderFactory SamReaderFactory to use, if null a default factory with no reference and validation
     *                               stringency SILENT is used.
     */
    public ReadsPathDataSource( final List<Path> samPaths, final List<Path> samIndices,
                                SamReaderFactory customSamReaderFactory ) {
        this(samPaths, samIndices, customSamReaderFactory, 0, 0);
    }

    /**
     * Initialize this data source with multiple SAM/BAM/CRAM files, explicit indices for those files,
     * and a custom SamReaderFactory.
     *
     * @param samPaths paths to SAM/BAM/CRAM files, not null
     * @param samIndices indices for all of the SAM/BAM/CRAM files, in the same order as samPaths. May be null,
     *                   in which case index paths are inferred automatically.
     * @param customSamReaderFactory SamReaderFactory to use, if null a default factory with no reference and validation
     *                               stringency SILENT is used.
     * @param cloudPrefetchBuffer MB size of caching/prefetching wrapper for the data, if on Google Cloud (0 to disable).
     * @param cloudIndexPrefetchBuffer MB size of caching/prefetching wrapper for the index, if on Google Cloud (0 to disable).
     */
    public ReadsPathDataSource( final List<Path> samPaths, final List<Path> samIndices,
                                SamReaderFactory customSamReaderFactory,
                                int cloudPrefetchBuffer, int cloudIndexPrefetchBuffer) {
        this(samPaths, samIndices, customSamReaderFactory,
             BucketUtils.getPrefetchingWrapper(cloudPrefetchBuffer),
             BucketUtils.getPrefetchingWrapper(cloudIndexPrefetchBuffer) );
    }

    /**
     * Initialize this data source with multiple SAM/BAM/CRAM files, explicit indices for those files,
     * and a custom SamReaderFactory.
     *
     * @param samPaths paths to SAM/BAM/CRAM files, not null
     * @param samIndices indices for all of the SAM/BAM/CRAM files, in the same order as samPaths. May be null,
     *                   in which case index paths are inferred automatically.
     * @param customSamReaderFactory SamReaderFactory to use, if null a default factory with no reference and validation
     *                               stringency SILENT is used.
     * @param cloudWrapper caching/prefetching wrapper for the data, if on Google Cloud.
     * @param cloudIndexWrapper caching/prefetching wrapper for the index, if on Google Cloud.
     */
    public ReadsPathDataSource( final List<Path> samPaths, final List<Path> samIndices,
                                SamReaderFactory customSamReaderFactory,
                                Function<SeekableByteChannel, SeekableByteChannel> cloudWrapper,
                                Function<SeekableByteChannel, SeekableByteChannel> cloudIndexWrapper ) {
        Utils.nonNull(samPaths);
        Utils.nonEmpty(samPaths, "ReadsPathDataSource cannot be created from empty file list");

        if ( samIndices != null && samPaths.size() != samIndices.size() ) {
            throw new UserException(String.format("Must have the same number of BAM/CRAM/SAM paths and indices. Saw %d BAM/CRAM/SAMs but %d indices",
                                                  samPaths.size(), samIndices.size()));
        }

        readers = new LinkedHashMap<>(samPaths.size() * 2);
        backingPaths = new LinkedHashMap<>(samPaths.size() * 2);
        indicesAvailable = true;

        final SamReaderFactory samReaderFactory =
                customSamReaderFactory == null ?
                    SamReaderFactory.makeDefault().validationStringency(ReadConstants.DEFAULT_READ_VALIDATION_STRINGENCY) :
                    customSamReaderFactory;

        int samCount = 0;
        for ( final Path samPath : samPaths ) {
            // Ensure each file can be read
            try {
                IOUtil.assertFileIsReadable(samPath);
            }
            catch ( SAMException|IllegalArgumentException e ) {
                throw new UserException.CouldNotReadInputFile(samPath.toString(), e);
            }

            Function<SeekableByteChannel, SeekableByteChannel> wrapper =
                (BucketUtils.isEligibleForPrefetching(samPath)
                    ? cloudWrapper
                    : Function.identity());
            // if samIndices==null then we'll guess the index name from the file name.
            // If the file's on the cloud, then the search will only consider locations that are also
            // in the cloud.
            Function<SeekableByteChannel, SeekableByteChannel> indexWrapper =
                ((samIndices != null && BucketUtils.isEligibleForPrefetching(samIndices.get(samCount))
                 || (samIndices == null && BucketUtils.isEligibleForPrefetching(samPath)))
                    ? cloudIndexWrapper
                    : Function.identity());

            SamReader reader;
            if ( samIndices == null ) {
                reader = samReaderFactory.open(samPath, wrapper, indexWrapper);
            }
            else {
                final SamInputResource samResource = SamInputResource.of(samPath, wrapper);
                Path indexPath = samIndices.get(samCount);
                samResource.index(indexPath, indexWrapper);
                reader = samReaderFactory.open(samResource);
            }

            // Ensure that each file has an index
            if ( ! reader.hasIndex() ) {
                indicesAvailable = false;
            }

            readers.put(reader, null);
            backingPaths.put(reader, samPath);
            ++samCount;
        }

        // Prepare a header merger only if we have multiple readers
        headerMerger = samPaths.size() > 1 ? createHeaderMerger() : null;
    }

    /**
     * Are indices available for all files?
     */
    public boolean indicesAvailable() {
        return indicesAvailable;
    }

    /**
     * @return true if indices are available for all inputs.
     * This is identical to {@link #indicesAvailable}
     */
    @Override
    public boolean isQueryableByInterval() {
        return indicesAvailable();
    }

    /**
     * Restricts a traversal of this data source via {@link #iterator} to only return reads that overlap the given intervals,
     * and to unmapped reads if specified.
     *
     * Calls to {@link #query} are not affected by this method.
     *
     * @param intervals Our next full traversal will return reads overlapping these intervals
     * @param traverseUnmapped Our next full traversal will return unmapped reads (this affects only unmapped reads that
     *                         have no position -- unmapped reads that have the position of their mapped mates will be
     *                         included if the interval overlapping that position is included).
     */
    @Override
    public void setTraversalBounds( final List<SimpleInterval> intervals, final boolean traverseUnmapped ) {
        // Set intervalsForTraversal to null if intervals is either null or empty
        this.intervalsForTraversal = intervals != null && ! intervals.isEmpty() ? intervals : null;
        this.traverseUnmapped = traverseUnmapped;

        if ( traversalIsBounded() && ! indicesAvailable ) {
            raiseExceptionForMissingIndex("Traversal by intervals was requested but some input files are not indexed.");
        }
    }

    /**
     * @return True if traversals initiated via {@link #iterator} will be restricted to reads that overlap intervals
     *         as configured via {@link #setTraversalBounds}, otherwise false
     */
    @Override
    public boolean traversalIsBounded() {
        return intervalsForTraversal != null || traverseUnmapped;
    }

    private void raiseExceptionForMissingIndex( String reason ) {
        String commandsToIndex = backingPaths.entrySet().stream()
                .filter(f -> !f.getKey().hasIndex())
                .map(Map.Entry::getValue)
                .map(Path::toAbsolutePath)
                .map(f -> "samtools index " + f)
                .collect(Collectors.joining("\n","\n","\n"));

        throw new UserException(reason + "\nPlease index all input files:\n" + commandsToIndex);
    }

    /**
     * Iterate over all reads in this data source. If intervals were provided via {@link #setTraversalBounds},
     * iteration is limited to reads that overlap that set of intervals.
     *
     * @return An iterator over the reads in this data source, limited to reads that overlap the intervals supplied
     *         via {@link #setTraversalBounds} (if intervals were provided)
     */
    @Override
    public Iterator<GATKRead> iterator() {
        logger.debug("Preparing readers for traversal");
        return prepareIteratorsForTraversal(intervalsForTraversal, traverseUnmapped);
    }

    /**
     * Query reads over a specific interval. This operation is not affected by prior calls to
     * {@link #setTraversalBounds}
     *
     * @param interval The interval over which to query
     * @return Iterator over reads overlapping the query interval
     */
    @Override
    public Iterator<GATKRead> query( final SimpleInterval interval ) {
        if ( ! indicesAvailable ) {
            raiseExceptionForMissingIndex("Cannot query reads data source by interval unless all files are indexed");
        }

        return prepareIteratorsForTraversal(Arrays.asList(interval));
    }

    /**
     * @return An iterator over just the unmapped reads with no assigned position. This operation is not affected
     *         by prior calls to {@link #setTraversalBounds}. The underlying file must be indexed.
     */
    @Override
    public Iterator<GATKRead> queryUnmapped() {
        if ( ! indicesAvailable ) {
            raiseExceptionForMissingIndex("Cannot query reads data source by interval unless all files are indexed");
        }

        return prepareIteratorsForTraversal(null, true);
    }

    /**
     * Returns the SAM header for this data source. Will be a merged header if there are multiple readers.
     * If there is only a single reader, returns its header directly.
     *
     * @return SAM header for this data source
     */
    @Override
    public SAMFileHeader getHeader() {
        return headerMerger != null ?
                headerMerger.getMergedHeader() :
                readers.entrySet().iterator().next().getKey().getFileHeader();
    }

    /**
     * Prepare iterators over all readers in response to a request for a complete iteration or query
     *
     * If there are multiple intervals, they must have been optimized using QueryInterval.optimizeIntervals()
     * before calling this method.
     *
     * @param queryIntervals Intervals to bound the iteration (reads must overlap one of these intervals). If null, iteration is unbounded.
     * @return Iterator over all reads in this data source, limited to overlap with the supplied intervals
     */
    private Iterator<GATKRead> prepareIteratorsForTraversal( final List<SimpleInterval> queryIntervals ) {
        return prepareIteratorsForTraversal(queryIntervals, false);
    }

    /**
     * Prepare iterators over all readers in response to a request for a complete iteration or query
     *
     * @param queryIntervals Intervals to bound the iteration (reads must overlap one of these intervals). If null, iteration is unbounded.
     * @return Iterator over all reads in this data source, limited to overlap with the supplied intervals
     */
    private Iterator<GATKRead> prepareIteratorsForTraversal( final List<SimpleInterval> queryIntervals, final boolean queryUnmapped ) {
        // htsjdk requires that only one iterator be open at a time per reader, so close out
        // any previous iterations
        closePreviousIterationsIfNecessary();

        final boolean traversalIsBounded = (queryIntervals != null && ! queryIntervals.isEmpty()) || queryUnmapped;

        // Set up an iterator for each reader, bounded to overlap with the supplied intervals if there are any
        for ( Map.Entry<SamReader, CloseableIterator<SAMRecord>> readerEntry : readers.entrySet() ) {
            if (traversalIsBounded) {
                readerEntry.setValue(
                        new SamReaderQueryingIterator(
                                readerEntry.getKey(),
                                readers.size() > 1 ?
                                        getIntervalsOverlappingReader(readerEntry.getKey(), queryIntervals) :
                                        queryIntervals,
                                queryUnmapped));
            } else {
                readerEntry.setValue(readerEntry.getKey().iterator());
            }
        }

        // Create a merging iterator over all readers if necessary. In the case where there's only a single reader,
        // return its iterator directly to avoid the overhead of the merging iterator.
        Iterator<SAMRecord> startingIterator = null;
        if ( readers.size() == 1 ) {
            startingIterator = readers.entrySet().iterator().next().getValue();
        }
        else {
            startingIterator = new MergingSamRecordIterator(headerMerger, readers, true);
        }

        return new SAMRecordToReadIterator(startingIterator);
    }

    /**
     * Reduce the intervals down to only include ones that can actually intersect with this reader
     */
    private List<SimpleInterval> getIntervalsOverlappingReader(
            final SamReader samReader,
            final List<SimpleInterval> queryIntervals ){
        final SAMSequenceDictionary sequenceDictionary = samReader.getFileHeader().getSequenceDictionary();
        return queryIntervals.stream()
                .filter(interval -> IntervalUtils.intervalIsOnDictionaryContig(interval, sequenceDictionary))
                .collect(Collectors.toList());
    }

    /**
     * Create a header merger from the individual SAM/BAM headers in our readers
     *
     * @return a header merger containing all individual headers in this data source
     */
    private SamFileHeaderMerger createHeaderMerger() {
        List<SAMFileHeader> headers = new ArrayList<>(readers.size());
        for ( Map.Entry<SamReader, CloseableIterator<SAMRecord>> readerEntry : readers.entrySet() ) {
            headers.add(readerEntry.getKey().getFileHeader());
        }

        SamFileHeaderMerger headerMerger = new SamFileHeaderMerger(identifySortOrder(headers), headers, true);
        return headerMerger;
    }

    @VisibleForTesting
    static SAMFileHeader.SortOrder identifySortOrder( final List<SAMFileHeader> headers ){
        final Set<SAMFileHeader.SortOrder> sortOrders = headers.stream().map(SAMFileHeader::getSortOrder).collect(Collectors.toSet());
        final SAMFileHeader.SortOrder order;
        if (sortOrders.size() == 1) {
            order = sortOrders.iterator().next();
        } else {
            order = SAMFileHeader.SortOrder.unsorted;
            logger.warn("Inputs have different sort orders. Assuming {} sorted reads for all of them.", order);
        }
        return order;
    }

    /**
     * @return true if this {@code ReadsPathDataSource} supports serial iteration (has only non-SAM inputs). If any
     * input has type==SAM_TYPE (is backed by a SamFileReader) this will return false, since SamFileReader
     * doesn't support serial iterators, and can't be serially re-traversed without re-initialization of the
     * underlying reader (and {@code ReadsPathDataSource}.
     */
    public boolean supportsSerialIteration() {
        return !hasSAMInputs();
    }

    /**
     * Shut down this data source permanently, closing all iterations and readers.
     */
    @Override
    public void close() {
        if (isClosed) {
            return;
        }
        isClosed = true;

        closePreviousIterationsIfNecessary();

        try {
            for ( Map.Entry<SamReader, CloseableIterator<SAMRecord>> readerEntry : readers.entrySet() ) {
                readerEntry.getKey().close();
            }
        }
        catch ( IOException e ) {
            throw new GATKException("Error closing SAMReader");
        }
    }

    boolean isClosed() {
        return isClosed;
    }

    /**
     * Close any previously-opened iterations over our readers (htsjdk allows only one open iteration per reader).
     */
    private void closePreviousIterationsIfNecessary() {
        for ( Map.Entry<SamReader, CloseableIterator<SAMRecord>> readerEntry : readers.entrySet() ) {
            CloseableIterator<SAMRecord> readerIterator = readerEntry.getValue();
            if ( readerIterator != null ) {
                readerIterator.close();
                readerEntry.setValue(null);
            }
        }
    }

    // Return true if any input is has type==SAM_TYPE (is backed by a SamFileReader) since SamFileReader
    // doesn't support serial iterators and can't be serially re-traversed without re-initialization of the
    // reader
    private boolean hasSAMInputs() {
        return readers.keySet().stream().anyMatch(r -> r.type().equals(SamReader.Type.SAM_TYPE));
    }

    /**
     * Get the sequence dictionary for this ReadsPathDataSource
     *
     * @return SAMSequenceDictionary from the SAMReader backing this if there is only 1 input file, otherwise the merged SAMSequenceDictionary from the merged header
     */
    @Override
    public SAMSequenceDictionary getSequenceDictionary() {
        return getHeader().getSequenceDictionary();
    }
}
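
As the javadoc above notes, query and queryUnmapped ignore any traversal bounds configured via setTraversalBounds, and both require every input to be indexed. A minimal sketch of a targeted query (the file name is hypothetical; this is an illustration, not GATK code):

import org.broadinstitute.hellbender.engine.ReadsPathDataSource;
import org.broadinstitute.hellbender.utils.SimpleInterval;
import org.broadinstitute.hellbender.utils.read.GATKRead;

import java.nio.file.Paths;
import java.util.Iterator;

public final class TargetedQueryExample {
    public static void main(final String[] args) {
        // Hypothetical indexed BAM; query() raises a UserException if the index is missing.
        try (final ReadsPathDataSource readsSource = new ReadsPathDataSource(Paths.get("sample.bam"))) {
            // Targeted query over one interval, unaffected by any traversal bounds.
            final Iterator<GATKRead> reads = readsSource.query(new SimpleInterval("chr20", 10_000, 20_000));
            while (reads.hasNext()) {
                final GATKRead read = reads.next();
                System.out.println(read.getName() + " @ " + read.getContig() + ":" + read.getStart());
            }

            // Unmapped reads with no assigned position are retrieved separately;
            // opening this iterator closes the previous one (one open iterator per reader).
            final Iterator<GATKRead> unmapped = readsSource.queryUnmapped();
            System.out.println("Has position-less unmapped reads: " + unmapped.hasNext());
        }
    }
}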

ReadsDataSource Source Code

package org.broadinstitute.hellbender.engine;

import htsjdk.samtools.SAMFileHeader;
import htsjdk.samtools.SAMSequenceDictionary;
import org.broadinstitute.hellbender.utils.SimpleInterval;
import org.broadinstitute.hellbender.utils.read.GATKRead;

import java.util.Iterator;
import java.util.List;

/**
 * An interface for managing traversals over sources of reads.
 *
 * Two basic operations are available:
 *
 * -Iteration over all reads, optionally restricted to reads that overlap a set of intervals
 * -Targeted queries by one interval at a time
 */
public interface ReadsDataSource extends GATKDataSource<GATKRead>, AutoCloseable {
    /**
     * Restricts a traversal of this data source via {@link #iterator} to only return reads that overlap the given intervals,
     * and to unmapped reads if specified.
     *
     * Calls to {@link #query} are not affected by this method.
     *
     * @param intervals Our next full traversal will return reads overlapping these intervals
     * @param traverseUnmapped Our next full traversal will return unmapped reads (this affects only unmapped reads that
     *                         have no position -- unmapped reads that have the position of their mapped mates will be
     *                         included if the interval overlapping that position is included).
     */
    void setTraversalBounds(List<SimpleInterval> intervals, boolean traverseUnmapped);

    /**
     * Restricts a traversal of this data source via {@link #iterator} to only return reads which overlap the given intervals.
     * Calls to {@link #query} are not affected by setting these intervals.
     *
     * @param intervals Our next full traversal will return only reads overlapping these intervals
     */
    default void setTraversalBounds(List<SimpleInterval> intervals) {
        setTraversalBounds(intervals, false);
    }

    /**
     * Restricts a traversal of this data source via {@link #iterator} to only return reads that overlap the given intervals,
     * and to unmapped reads if specified.
     *
     * Calls to {@link #query} are not affected by this method.
     *
     * @param traversalParameters set of traversal parameters to control which reads get returned by the next call
     *                            to {@link #iterator}
     */
    default void setTraversalBounds(TraversalParameters traversalParameters){
        setTraversalBounds(traversalParameters.getIntervalsForTraversal(), traversalParameters.traverseUnmappedReads());
    }

    /**
     * @return true if traversals initiated via {@link #iterator} will be restricted to reads that overlap intervals
     *         as configured via {@link #setTraversalBounds}, otherwise false
     */
    boolean traversalIsBounded();

    /**
     * @return true if this datasource supports the query() operation otherwise false.
     */
    boolean isQueryableByInterval();

    /**
     * @return An iterator over just the unmapped reads with no assigned position. This operation is not affected
     *         by prior calls to {@link #setTraversalBounds}. The underlying file must be indexed.
     */
    Iterator<GATKRead> queryUnmapped();

    /**
     * Returns the SAM header for this data source.
     *
     * @return SAM header for this data source
     */
    SAMFileHeader getHeader();

    /**
     * Get the sequence dictionary for this ReadsDataSource
     *
     * @return SAMSequenceDictionary for the reads backing this datasource.
     */
    default SAMSequenceDictionary getSequenceDictionary(){
        return getHeader().getSequenceDictionary();
    }

    /**
     * @return true if this {@code ReadsDataSource} supports multiple iterations over the data
     */
    boolean supportsSerialIteration();

    /**
     * Shut down this data source permanently, closing all iterations and readers.
     */
    @Override  //Overriden here to disallow throwing checked exceptions.
    void close();
}
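
GATK tools generally program against this interface rather than against the concrete ReadsPathDataSource, so a helper written only in terms of ReadsDataSource works with any implementation. A minimal sketch (the helper class and its printout are illustrative and not part of GATK):

import htsjdk.samtools.SAMSequenceDictionary;
import org.broadinstitute.hellbender.engine.ReadsDataSource;
import org.broadinstitute.hellbender.utils.SimpleInterval;
import org.broadinstitute.hellbender.utils.read.GATKRead;

import java.util.Iterator;

public final class ReadsDataSourceSummary {
    /**
     * Print a few basic facts about any ReadsDataSource implementation,
     * using only methods declared on the interface.
     */
    public static void summarize(final ReadsDataSource reads, final SimpleInterval intervalOfInterest) {
        final SAMSequenceDictionary dict = reads.getSequenceDictionary();
        System.out.println("Contigs in sequence dictionary: " + dict.size());
        System.out.println("Supports interval queries: " + reads.isQueryableByInterval());
        System.out.println("Supports repeated iteration: " + reads.supportsSerialIteration());

        // Only attempt a targeted query when the implementation advertises index support.
        if (reads.isQueryableByInterval()) {
            long count = 0;
            final Iterator<GATKRead> it = reads.query(intervalOfInterest);
            while (it.hasNext()) {
                it.next();
                count++;
            }
            System.out.println("Reads overlapping " + intervalOfInterest + ": " + count);
        }
    }
}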

GATKDataSource Source Code

package org.broadinstitute.hellbender.engine;

import org.broadinstitute.hellbender.utils.SimpleInterval;

import java.util.Iterator;

/**
 * A GATKDataSource is something that can be iterated over from start to finish
 * and/or queried by genomic interval. It is not necessarily file-based.
 *
 * @param <T> Type of data in the data source
 */
public interface GATKDataSource<T> extends Iterable<T> {
    Iterator<T> query(final SimpleInterval interval);
}
