C++ library for local machine learning inference
blace.ai is a lightweight meta library & model hub for C++ that abstracts away OS differences and AI backends like PyTorch or ONNX Runtime. It enables fast, hardware-accelerated model inference on Windows, macOS, and Linux. With support for GPU acceleration (CUDA & Metal) and a modular graph-based API, blace.ai lets you build efficient, cacheable inference pipelines without worrying about memory management or low-level integration details.
Example graph of an application using blace.ai to run Segment Anything mask inference on an image. Note that the result of the dark-orange node is cached internally, so if the image buffer stays the same, the mask prediction reruns very quickly.
#include "blace_ai.h"
#include <filesystem>
#include <opencv2/opencv.hpp>

// include the models you want to use
#include "depth_anything_v2_v1_small_v3_ALL_export_version_vNone.h"

int main() {
  ::blace::workload_management::BlaceWorld blace;

  // load image into op
  auto exe_path = blace::util::getPathToExe();
  std::filesystem::path photo_path = exe_path / "butterfly.jpg";
  auto world_tensor_orig =
      CONSTRUCT_OP(blace::ops::FromImageFileOp(photo_path.string()));

  // interpolate to a size consumable by the model
  auto interpolated = CONSTRUCT_OP(blace::ops::Interpolate2DOp(
      world_tensor_orig, 700, 1288, blace::ml_core::BICUBIC, false, true));

  // construct model inference arguments
  blace::ml_core::InferenceArgsCollection infer_args;
  infer_args.inference_args.device = blace::util::get_accelerator().value();

  // construct inference operation
  auto infer_op = depth_anything_v2_v1_small_v3_ALL_export_version_vNone_run(
      interpolated, 0, infer_args, blace::util::getPathToExe().string());

  // normalize depth to zero-one range
  auto result_depth = CONSTRUCT_OP(blace::ops::NormalizeToZeroOneOP(infer_op));

  // construct evaluator and evaluate to cv::Mat
  blace::computation_graph::GraphEvaluator evaluator(result_depth);
  auto cv_result = evaluator.evaluateToCVMat().value();

  // scale from [0, 1] to [0, 255] for plotting
  cv_result *= 255.;

  // save to disk and return
  auto out_file = exe_path / "depth_result.png";
  cv::imwrite(out_file.string(), cv_result);

  return 0;
}
Example implementation for Depth Anything v2 inference.
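The caching described in the graph figure above can be put to work by sharing the expensive upstream node between evaluations. The sketch below reuses only the identifiers shown in the example; whether the cache is shared across `GraphEvaluator` instances is an assumption inferred from the caption, not a documented guarantee.

```cpp
#include "blace_ai.h"
#include <filesystem>
#include <opencv2/opencv.hpp>
#include "depth_anything_v2_v1_small_v3_ALL_export_version_vNone.h"

int main() {
  ::blace::workload_management::BlaceWorld blace;

  // build the same pipeline as in the example above
  auto exe_path = blace::util::getPathToExe();
  auto image = CONSTRUCT_OP(
      blace::ops::FromImageFileOp((exe_path / "butterfly.jpg").string()));
  auto interpolated = CONSTRUCT_OP(blace::ops::Interpolate2DOp(
      image, 700, 1288, blace::ml_core::BICUBIC, false, true));

  blace::ml_core::InferenceArgsCollection infer_args;
  infer_args.inference_args.device = blace::util::get_accelerator().value();
  auto infer_op = depth_anything_v2_v1_small_v3_ALL_export_version_vNone_run(
      interpolated, 0, infer_args, exe_path.string());

  auto normalized = CONSTRUCT_OP(blace::ops::NormalizeToZeroOneOP(infer_op));

  // first evaluation: runs the full pipeline and fills the internal cache
  blace::computation_graph::GraphEvaluator eval_a(normalized);
  auto depth_a = eval_a.evaluateToCVMat().value();

  // second evaluation over the same graph nodes: the image buffer is
  // unchanged, so (assuming caching works as the figure caption describes)
  // the model inference node is served from cache and only the cheap
  // normalization and readout are recomputed
  blace::computation_graph::GraphEvaluator eval_b(normalized);
  auto depth_b = eval_b.evaluateToCVMat().value();

  return 0;
}
```

The key design point is that ops are constructed once and referenced by both evaluators; rebuilding the whole graph from scratch each frame would still be correct, but sharing the `infer_op` node is what lets the cache skip the model run.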