knncolle
Collection of KNN methods in C++

Overview

knncolle is a header-only C++ library that collects a variety of different k-nearest neighbor algorithms under a consistent interface. The aim is to enable downstream libraries to easily switch between different methods with a single runtime flag, or by just swapping out the relevant constructors at compile time.

The core library supports the following methods:

  • Vantage point tree, an exact search that uses the tree of the same name.
  • Brute force search, mostly implemented for testing.

This framework is extended by various add-on libraries that provide more algorithms:

  • knncolle_annoy, for approximate searches with the Annoy algorithm.
  • knncolle_hnsw, for approximate searches with hierarchical navigable small worlds (HNSW).
  • knncolle_kmknn, for exact searches with the k-means k-nearest neighbors (KMKNN) algorithm (Wang, 2012).

Most of the code in this library is derived from the BiocNeighbors R package.

Quick start

Given a matrix with dimensions in the rows and observations in the columns, we can do:

#include "knncolle/knncolle.hpp"

int ndim = 10;
int nobs = 1000;
std::vector<double> matrix(ndim * nobs); // column-major dims x obs matrix.

// Wrap our data in a SimpleMatrix.
knncolle::SimpleMatrix<
    /* observation index */ int,
    /* data type */ double
> mat(ndim, nobs, matrix.data());

// Build a VP-tree index with double-precision Euclidean distances.
knncolle::VptreeBuilder<
    /* observation index */ int,
    /* data type */ double,
    /* distance type */ double
> vp_builder(
    std::make_shared<knncolle::EuclideanDistance<
        /* data type = */ double,
        /* distance type = */ double
    > >()
);
auto vp_index = vp_builder.build_unique(mat);

// Find 10 nearest neighbors of every observation.
auto results = knncolle::find_nearest_neighbors(*vp_index, 10);
results[0].first; // indices of neighbors of the first observation
results[0].second; // distances to neighbors of the first observation

Check out the reference documentation for more details.

Searching in more detail

We can perform the search manually by constructing a Searcher instance and looping over the elements of interest. Continuing with the same variables defined in the previous section, we could replace the find_nearest_neighbors() call with:

auto searcher = vp_index->initialize();
std::vector<int> indices;
std::vector<double> distances;
for (int o = 0; o < nobs; ++o) {
    searcher->search(o, 10, &indices, &distances);
    // Do something with the search results for 'o'.
}

Similarly, we can query the prebuilt index for the neighbors of an arbitrary vector. The code below searches for the nearest 5 neighbors to a query vector at the origin:

std::vector<double> query(ndim);
searcher->search(query.data(), 5, &indices, &distances);

To parallelize the loop, we just need to construct a separate Searcher (and the result vector) for each thread. This is already implemented in find_nearest_neighbors() but is also easy to do by hand, e.g., with OpenMP:

#pragma omp parallel num_threads(5)
{
    auto searcher = vp_index->initialize();
    std::vector<int> indices;
    std::vector<double> distances;

    #pragma omp for
    for (int o = 0; o < nobs; ++o) {
        searcher->search(o, 10, &indices, &distances);
        // Do something with the search results for 'o'.
    }
}

Either (or both) of indices and distances may be NULL, in which case the corresponding values are not reported. This allows implementations to skip the extraction of distances when only the identities of the neighbors are of interest.

searcher->search(0, 5, &indices, NULL);
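
Conversely, passing NULL for the indices reports only the distances:

searcher->search(0, 5, NULL, &distances);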

Finding all neighbors within range

A related problem involves finding all neighbors within a certain distance of an observation. This can be achieved using the Searcher::search_all() method:

if (searcher->can_search_all()) {
    // Report all neighbors within a distance of 10 from the first point.
    searcher->search_all(0, 10, &indices, &distances);

    // Report all neighbors within a distance of 0.5 from a query point.
    searcher->search_all(query.data(), 0.5, &indices, &distances);
}

This method is optional, so developers of Searcher subclasses may choose not to implement it. Applications should check Searcher::can_search_all() before attempting a call, as shown above; otherwise, the default implementation will throw an exception.
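
If range search is not supported, one possible workaround is to fall back to a fixed-size search and filter on the reported distances. The sketch below assumes that neighbors are reported in order of increasing distance; note that it will miss neighbors if more than 10 of them lie within the threshold.

double threshold = 10;
if (searcher->can_search_all()) {
    searcher->search_all(0, threshold, &indices, &distances);
} else {
    // Fallback sketch: take the 10 nearest neighbors...
    searcher->search(0, 10, &indices, &distances);
    // ...and drop those lying beyond the threshold.
    while (!distances.empty() && distances.back() > threshold) {
        distances.pop_back();
        indices.pop_back();
    }
}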

Polymorphism via interfaces

All KNN search algorithms implement the Builder, Prebuilt and Searcher interfaces via inheritance. This means that users can swap algorithms at run-time:

auto dist_type = std::make_shared<knncolle::EuclideanDistance<double, double> >();

std::unique_ptr<knncolle::Builder<int, double, double> > ptr;
if (algorithm == "brute-force") {
    ptr = std::make_unique<knncolle::BruteforceBuilder<int, double, double> >(dist_type);
} else if (algorithm == "vp-tree") {
    ptr = std::make_unique<knncolle::VptreeBuilder<int, double, double> >(dist_type);
} else {
    // do something else
}

auto some_prebuilt = ptr->build_unique(mat);
auto some_results = knncolle::find_nearest_neighbors(*some_prebuilt, 10);

Similarly, for algorithms that accept a DistanceMetric, we can switch between distances at run-time:

std::shared_ptr<knncolle::DistanceMetric<double, double> > distptr;
if (distance == "euclidean") {
    distptr = std::make_shared<knncolle::EuclideanDistance<double, double> >();
} else if (distance == "manhattan") {
    distptr = std::make_shared<knncolle::ManhattanDistance<double, double> >();
} else {
    // do something else.
}

knncolle::VptreeBuilder<int, double, double> vp_builder(std::move(distptr));

We can even switch between input matrix representations at run-time, as long as they follow the Matrix interface. This allows the various Builder classes to accept input data in other formats (e.g., sparse, file-backed). For example, knncolle implements the L2NormalizedMatrix subclass to apply on-the-fly L2 normalization of each observation's vector of coordinates. This is used inside the L2NormalizedBuilder class to transform an existing neighbor search method from Euclidean to cosine distances.

auto builder = std::make_shared<knncolle::VptreeBuilder<int, double, double> >(
    std::make_shared<knncolle::EuclideanDistance<double, double> >()
);

auto l2builder = std::make_shared<knncolle::L2NormalizedBuilder<
    /* observation index */ int,
    /* data type */ double,
    /* distance type */ double,
    /* normalized type */ double
> >(std::move(builder));

// Any Matrix 'mat' is automatically wrapped in an L2NormalizedMatrix
// before being passed to 'builder->build_unique'.
auto l2index = l2builder->build_unique(mat);

Check out the reference documentation for more details on these interfaces.

Modifying template parameters

Each interface has a few template parameters to define its types. In general, we recommend using ints for the observation indices and doubles for the data and distances. If precision is not a concern, we can achieve greater speed by swapping doubles with floats. We may also need to swap int with size_t for larger datasets, e.g., more than 2 billion observations.
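
For example, a single-precision configuration with size_t indices might look like the sketch below, which mirrors the template arguments from the Quick start:

knncolle::VptreeBuilder<
    /* observation index */ std::size_t,
    /* data type */ float,
    /* distance type */ float
> float_builder(
    std::make_shared<knncolle::EuclideanDistance<float, float> >()
);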

Advanced users can set up the templates to bypass virtual dispatch at the cost of more compile-time complexity. For example, we could parametrize the VptreeBuilder so that it is hard-coded to use Euclidean distances and to only accept column-major in-memory matrices. This gives the compiler an opportunity to devirtualize the relevant method calls for a potential performance improvement.

typedef knncolle::VptreeBuilder<
    int,
    double,
    double,
    knncolle::SimpleMatrix<int, double>,
    knncolle::EuclideanDistance<double, double>
> VptreeEuclideanSimple;
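
Assuming the constructor is otherwise unchanged, the alias can then be used like any other builder; a sketch:

VptreeEuclideanSimple simple_builder(
    std::make_shared<knncolle::EuclideanDistance<double, double> >()
);
auto simple_index = simple_builder.build_unique(mat);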

Building projects with knncolle

CMake with FetchContent

If you're using CMake, you just need to add something like this to your CMakeLists.txt:

include(FetchContent)
FetchContent_Declare(
    knncolle
    GIT_REPOSITORY https://github.com/knncolle/knncolle
    GIT_TAG master # or any version of interest
)
FetchContent_MakeAvailable(knncolle)

Then you can link to knncolle to make the headers available during compilation:

# For executables:
target_link_libraries(myexe knncolle::knncolle)
# For libraries:
target_link_libraries(mylib INTERFACE knncolle::knncolle)

CMake with find_package()

find_package(knncolle_knncolle CONFIG REQUIRED)
target_link_libraries(mylib INTERFACE knncolle::knncolle)

To install the library, use:

mkdir build && cd build
cmake .. -DKNNCOLLE_TESTS=OFF
cmake --build . --target install

By default, this will use FetchContent to fetch all external dependencies. If you want to install them manually, use -DKNNCOLLE_FETCH_EXTERN=OFF. See extern/CMakeLists.txt to find compatible versions of each dependency.

Manual

If you're not using CMake, the simple approach is to just copy the files in include/ - either directly or with Git submodules - and include their path during compilation with, e.g., GCC's -I. The external dependencies listed in extern/CMakeLists.txt also need to be made available during compilation.

References

Wang X (2012). A fast exact k-nearest neighbors algorithm for high dimensional search using k-means clustering and triangle inequality. Proc Int Jt Conf Neural Netw, 43, 6:2351-2358.

Hanov S (2011). VP trees: A data structure for finding stuff fast. http://stevehanov.ca/blog/index.php?id=130

Yianilos PN (1993). Data structures and algorithms for nearest neighbor search in general metric spaces. Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, 311-321.