SHOGUN is a machine learning toolbox whose main focus is on large-scale kernel methods and especially on Support Vector Machines (SVMs). It provides a generic SVM object interfacing to several different SVM implementations, together with efficient kernel implementations. Besides SVMs and regression, SHOGUN also features linear methods such as Linear Discriminant Analysis (LDA), the Linear Programming Machine (LPM), (kernel) perceptrons, and algorithms for hidden Markov models. SHOGUN can be used from C++, Matlab, R, Octave, and Python.
SHOGUN 1.1.0 introduces the concept of "Converters", which can be used to construct arbitrary embeddings. This release also includes several new dimension-reduction techniques and significant performance improvements in the dimension-reduction toolkit. Other improvements include a substantial compilation speedup, bug fixes in various modular interfaces and algorithms, and improved compatibility with Cygwin, Mac OS X, and clang++. GitHub issues are now used to track bugs and problems.
The machine learning toolbox's focus is on large-scale kernel methods and especially on Support Vector Machines (SVMs). It provides a generic SVM object interfacing to several different SVM implementations, among them the state-of-the-art OCAS, Liblinear, LibSVM, SVMLight, SVMLin, and GPDT. Each of the SVMs can be combined with a variety of kernels. The toolbox not only provides efficient implementations of the most common kernels, like the Linear, Polynomial, Gaussian, and Sigmoid kernels, but also comes with a number of recent string kernels, e.g. the Locality Improved, Fisher, TOP, Spectrum, and Weighted Degree kernel (with shifts). For the latter, the efficient LINADD optimizations are implemented. For linear SVMs, the COFFIN framework allows computing feature spaces on demand, on the fly, even allowing sparse, dense, and other data types to be mixed. Furthermore, SHOGUN offers the freedom of working with custom pre-computed kernels. One of its key features is the combined kernel, which can be constructed as a weighted linear combination of a number of sub-kernels, each of which need not operate on the same domain. An optimal sub-kernel weighting can be learned using Multiple Kernel Learning. Currently, one-class, two-class, and multiclass classification as well as regression problems can be handled with SVMs. SHOGUN also implements a number of linear methods like Linear Discriminant Analysis (LDA), the Linear Programming Machine (LPM), and (kernel) perceptrons, and features algorithms to train hidden Markov models. The input feature objects can be dense, sparse, or strings of type int/short/double/char, and can be converted into different feature types. Chains of preprocessors (e.g. subtracting the mean) can be attached to each feature object, allowing for on-the-fly pre-processing.
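The combined-kernel idea described above can be sketched independently of SHOGUN in plain NumPy. This is a conceptual illustration only; the function names below are made up for the example and are not SHOGUN's API. It builds a weighted linear combination of a linear and a Gaussian sub-kernel on the same toy data:

```python
import numpy as np

def linear_kernel(X, Y):
    # K[i, j] = <x_i, y_j>
    return X @ Y.T

def gaussian_kernel(X, Y, width=1.0):
    # K[i, j] = exp(-||x_i - y_j||^2 / width)
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / width)

def combined_kernel(X, Y, weights, kernels):
    # Weighted linear combination of sub-kernel matrices:
    # K = sum_k beta_k * K_k.  With beta_k >= 0 and each K_k a valid
    # (positive semi-definite) kernel, K is again a valid kernel.
    return sum(beta * k(X, Y) for beta, k in zip(weights, kernels))

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))

# Fixed weights here; in Multiple Kernel Learning these weights
# would themselves be learned from the data.
K = combined_kernel(X, X, [0.7, 0.3], [linear_kernel, gaussian_kernel])
```

The resulting matrix `K` is symmetric and positive semi-definite, so it can be handed to any kernel machine as a pre-computed kernel.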