Darwin Tutorial

This tutorial is designed to give you a basic flavour of the design and use of the common features of the Darwin libraries. It assumes you are familiar with C++ and the Linux development environment.

The tutorial is divided into a number of parts, each exploring a different library within the Darwin framework. The parts are:

Hello World
Base and IO Libraries (drwnBase and drwnIO)
Machine Learning Library (drwnML)
Probabilistic Graphical Models Library (drwnPGM)

Source code for all the tutorials is included in the projects/tutorial directory.

Hello World

This tutorial steps you through creating your first application using the Darwin framework under Linux—the application will simply print "hello world" using Darwin's logging system (see Messages, Warnings and Errors). It assumes that you have successfully downloaded and compiled the Darwin libraries (see Downloading and Installing). Throughout the tutorial we will assume that $DARWIN is the location (i.e., directory) where Darwin is installed.

Create a Project Directory

mkdir helloworld
cd helloworld

Create the Application

Create a file called helloworld.cpp containing the following code:

// c++ standard headers
#include <cstdlib>
#include <cstdio>
#include <iostream>

// darwin library headers
#include "drwnBase.h"

using namespace std;

// usage ---------------------------------------------------------------------

void usage()
{
    cerr << DRWN_USAGE_HEADER << endl;
    cerr << "USAGE: ./helloworld [OPTIONS]\n";
    cerr << "OPTIONS:\n"
         << DRWN_STANDARD_OPTIONS_USAGE
         << endl;
}

// main ----------------------------------------------------------------------

int main(int argc, char *argv[])
{
    // process commandline arguments
    DRWN_BEGIN_CMDLINE_PROCESSING(argc, argv)
    DRWN_END_CMDLINE_PROCESSING(usage());

    // print hello world and exit
    DRWN_LOG_MESSAGE("hello world");
    DRWN_LOG_VERBOSE("hello world again");
    return 0;
}

Create a Makefile

Create a file called Makefile containing the following:

INC_DIRS = -I${DARWIN}/include -I${DARWIN}/external
LIBS = -L${DARWIN}/bin -ldrwnML -ldrwnPGM -ldrwnIO -ldrwnBase -lm -lpthread

main:
	g++ -g -o helloworld helloworld.cpp ${INC_DIRS} ${LIBS}

clean:
	rm -f helloworld
Note
On Mac OS X systems you may need to add -stdlib=libstdc++ to the end of the g++ command. You may also need to add -D__APPLE__. On Linux systems (e.g., Ubuntu) you may need to add -D__LINUX__.
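For example, on Mac OS X the compile command might become:

g++ -g -o helloworld helloworld.cpp -I${DARWIN}/include -I${DARWIN}/external -L${DARWIN}/bin -ldrwnML -ldrwnPGM -ldrwnIO -ldrwnBase -lm -lpthread -stdlib=libstdc++ -D__APPLE__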

Make and Test the Application

The following shell commands will build the helloworld application and run it with some different settings.

make
./helloworld
./helloworld -verbose
./helloworld -quiet
./helloworld -set drwnLogger logLevel ERROR
./helloworld -log log.txt
cat log.txt
./helloworld -help
./helloworld -config

Base and IO Libraries (drwnBase and drwnIO)

Command Line Processing

Darwin provides a number of mechanisms for configuring applications from the command line. This is a very basic tutorial to get you familiar with the two main mechanisms—commandline options and XML configuration. Be sure to read the commandline processing documentation (Command Line Processing) and the configuration module documentation (Configuration Manager).

The following code is an example of how to configure a Darwin application either via the commandline options or an XML configuration file.

// c++ standard headers
#include <cstdlib>
#include <cstdio>
#include <cstring>
#include <iostream>

// darwin library headers
#include "drwnBase.h"

using namespace std;

// configuration manager -----------------------------------------------------

class MyConfigureableModule : public drwnConfigurableModule {
public:
    string INPUT_STRING;

public:
    MyConfigureableModule() : drwnConfigurableModule("MyModule") { }
    ~MyConfigureableModule() { }

    void usage(ostream &os) const {
        os << " This is an example configurable module\n";
        os << " input :: message string\n";
    }

    void setConfiguration(const char *name, const char *value) {
        if (!strcmp(name, "input")) {
            INPUT_STRING = string(value);
        } else {
            DRWN_LOG_FATAL("unrecognized configuration option " << name
                << " for " << this->name());
        }
    }
};

static MyConfigureableModule gMyConfig;

// usage ---------------------------------------------------------------------

void usage()
{
    cerr << DRWN_USAGE_HEADER << endl;
    cerr << "USAGE: ./cmdline [OPTIONS]\n";
    cerr << "OPTIONS:\n"
         << " -i <message> :: input message\n"
         << DRWN_STANDARD_OPTIONS_USAGE
         << endl;
}

// main ----------------------------------------------------------------------

int main(int argc, char* argv[])
{
    const char *input = NULL;

    DRWN_BEGIN_CMDLINE_PROCESSING(argc, argv)
        DRWN_CMDLINE_STR_OPTION("-i", input)
    DRWN_END_CMDLINE_PROCESSING(usage());

    if (input != NULL)
        gMyConfig.INPUT_STRING = string(input);

    cout << "Input message is \"" << gMyConfig.INPUT_STRING << "\"\n";
    return 0;
}

Note that there is no semicolon (";") after the intermediate commandline processing macros.
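Additional options follow the same pattern between the begin and end macros. For example, an integer option could be added as follows (a sketch; the DRWN_CMDLINE_INT_OPTION macro name is assumed from the drwnBase command line utilities):

int repeatCount = 1;

DRWN_BEGIN_CMDLINE_PROCESSING(argc, argv)
    DRWN_CMDLINE_STR_OPTION("-i", input)
    DRWN_CMDLINE_INT_OPTION("-n", repeatCount)
DRWN_END_CMDLINE_PROCESSING(usage());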

Create the following configuration file called "myConfig.xml":

<drwn>
    <MyModule input="hello" />
</drwn>

Compile the code and then run with the following options:

./cmdline -i "hello"
./cmdline -set MyModule input "hello"
./cmdline -config myConfig.xml
./cmdline -set MyModule output "hello"
./cmdline -help
./cmdline -set MyModule
./cmdline -i "hello" -i "goodbye"
./cmdline -i "hello" -set MyModule input "goodbye"
./cmdline -set MyModule input "hello" -i "goodbye"
See Also
drwnBase, drwnIO

XML I/O

This tutorial demonstrates how to make use of Darwin's XML utilities for serializing objects to disk. Any class derived from drwnWriteable will have access to read and write methods that read and write XML files to disk. The derived class will need to override the load and save methods.

We will begin by creating a serializable class that holds a person's name and age.

class Person : public drwnWriteable {
protected:
    string _name;
    int _age;

public:
    Person() : _age(0) { /* do nothing */ }
    Person(const string& name, int age) : _name(name), _age(age) { /* do nothing */ }

    const char* type() const { return "Person"; }

    bool save(drwnXMLNode& xml) const {
        drwnAddXMLAttribute(xml, "name", _name.c_str(), false);
        drwnAddXMLAttribute(xml, "age", toString(_age).c_str(), false);
        return true;
    }

    bool load(drwnXMLNode& xml) {
        _name = string(drwnGetXMLAttribute(xml, "name"));
        _age = atoi(drwnGetXMLAttribute(xml, "age"));
        return true;
    }
};

Of course, to be useful this class would need to implement a number of other methods (for example, member variable accessors). However, this lean implementation is enough for our purposes in this tutorial.
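For example, simple member variable accessors might look like the following (an illustrative sketch only, not part of the tutorial code):

const string& name() const { return _name; }
int age() const { return _age; }
void setAge(int age) { _age = age; }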

We now create a class that aggregates many people into a single group.

class Group : public drwnWriteable {
protected:
    vector<Person> _people;

public:
    Group() { /* do nothing */ }

    const char* type() const { return "Group"; }

    bool save(drwnXMLNode& xml) const {
        drwnXMLUtils::save(xml, "Person", _people);
        return true;
    }

    bool load(drwnXMLNode& xml) {
        _people.clear();
        drwnXMLUtils::load(xml, "Person", _people);
        return true;
    }

    void addMember(const Person& person) {
        _people.push_back(person);
    }
};

Finally, we create an application that populates a group of people and writes the group to disk.

int main()
{
    Group group;
    group.addMember(Person("Bart", 10));
    group.addMember(Person("Lisa", 8));
    group.addMember(Person("Maggie", 1));

    group.write("TheSimpsons.xml");
    group.dump();

    return 0;
}

Compile this program (you will need to write the appropriate makefile; a sketch is given below), run it, and examine the output.
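A suitable makefile, modelled on the helloworld example and assuming the source file is named xmlio.cpp (the name here is illustrative), might look like:

INC_DIRS = -I${DARWIN}/include -I${DARWIN}/external
LIBS = -L${DARWIN}/bin -ldrwnML -ldrwnPGM -ldrwnIO -ldrwnBase -lm -lpthread

main:
	g++ -g -o xmlio xmlio.cpp ${INC_DIRS} ${LIBS}

clean:
	rm -f xmlio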

See Also
drwnBase, drwnIO
drwnXMLParser, XML Utilities

Persistent Storage Management

Often we wish to store large volumes of binary data between sessions where XML files may be too inefficient or memory intensive. The drwnPersistentStorage class provides a simple mechanism for managing the reading and writing of multiple data records in such situations.

Suppose the data we wish to store is of type std::vector<double>. The first step is to provide a drwnPersistentRecord wrapper:

class Record : public drwnPersistentRecord {
public:
    vector<double> data;

public:
    Record(size_t n = 0) { data.resize(n); }
    ~Record() { /* do nothing */ }

    // number of bytes needed to represent the data on disk, including the length field
    size_t numBytesOnDisk() const { return data.size() * sizeof(double) + sizeof(size_t); }

    // write the data to disk
    bool write(ostream& os) const {
        size_t rows = data.size();
        os.write((const char *)&rows, sizeof(size_t));
        if (rows > 0) {
            os.write((const char *)&data[0], rows * sizeof(double));
        }
        return true;
    }

    // read the data from disk
    bool read(istream& is) {
        size_t rows = 0;
        is.read((char *)&rows, sizeof(size_t));
        data.resize(rows);
        if (rows > 0) {
            is.read((char *)&data[0], rows * sizeof(double));
        }
        return true;
    }
};
Note
The Darwin library provides the templated convenience class drwnPersistentVectorRecord for storing vectors of objects. Its implementation is similar to the one above.

We can now easily read, write and modify data records.

// open a persistent storage object
// (the storage declaration was omitted in the original snippet)
drwnPersistentStorage storage;
storage.open("/tmp/data");

// write some records
Record rec(1000);
storage.write("record A", &rec);
rec = Record(2000);
storage.write("record B", &rec);

// read back record A
storage.read("record A", &rec);

// print number of bytes used
cout << storage.numTotalBytes() << " bytes stored\n";

The data is stored permanently when the application closes and can be read later by re-opening the same storage file.
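For example, a later session could re-open the store and retrieve a record (a sketch re-using the Record class from above):

// re-open the storage file from a previous session
drwnPersistentStorage storage;
storage.open("/tmp/data");

// read back a record written earlier
Record rec;
storage.read("record A", &rec);
cout << "record A has " << rec.data.size() << " entries\n";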

Warning
Abnormal termination of an application may result in corrupt data storage.
Note
The drwnPersistentRecord interface can co-exist with the drwnWriteable interface, allowing objects to be written either to XML files or to a persistent Darwin storage file.
See Also
drwnBase, drwnIO

Machine Learning Library (drwnML)

The drwnML library implements a number of basic machine learning and pattern recognition algorithms. These are explored in the following tutorials.

Gaussian Distributions

Darwin provides functionality for estimating the parameters of, sampling from, computing sample likelihoods under, and conditioning on multi-dimensional Gaussian and mixture-of-Gaussian probability distributions. The following example shows how to define a two-dimensional Gaussian and then condition on the first variable to produce a one-dimensional Gaussian distribution.

// define mean and covariance
VectorXd mu(2);
mu << 0.5, 0.5;
MatrixXd sigma(2, 2);
sigma << 0.0964, -0.0505, -0.0505, 0.0540;
// create gaussian P(x_1, x_2)
drwnGaussian g(mu, sigma);
// create conditional gaussian P(x_2 | x_1 = 1.0)
map<int, double> x;
x[0] = 1.0;
drwnGaussian g2 = g.reduce(x);
// show mean vector
DRWN_LOG_MESSAGE("mean = " << g2.mean().transpose());

When performing multiple repeated conditioning on large Gaussians, creating an intermediate drwnConditionalGaussian object is more efficient since it will cache some intermediate calculations.

Exercise. Extend the above example to produce samples from the conditional distribution.

See Also
drwnBase, drwnIO, drwnML

Feature Transforms

The drwnFeatureTransform class defines the interface for supervised and unsupervised feature vector transformations, e.g., Fisher's LDA (supervised) or PCA (unsupervised). For many machine learning algorithms, pre-processing features to have zero mean and unit variance improves numerical stability. This tutorial shows such an example.

// construct feature vectors
vector<vector<double> > x(100);
for (unsigned i = 0; i < x.size(); i++) {
    x[i].resize(10);
    for (unsigned j = 0; j < x[i].size(); j++) {
        x[i][j] = (double)j + drand48();
    }
}

// learn normalization scale and offset
// (the whitener declaration was omitted in the original snippet)
drwnFeatureWhitener whitener;
whitener.train(x);

// normalize all feature vectors in-place
whitener.transform(x);

Once the feature transformation has been learned it can be applied to arbitrary feature vectors (i.e., not just those that were used during learning). For example,

// construct new feature vector
vector<double> y(10, 0);
// apply previously learned transform
whitener.transform(y);

The same data can be transformed via Principal Component Analysis (PCA). In the following example we set the number of output dimensions to 5.

// estimate principal directions
drwnPCA pca;
pca.setProperty("outputDim", 5);
pca.train(x);
// transform data from x to y
vector<vector<double> > y;
pca.transform(x, y);

We can also perform PCA on a transformed feature space, e.g.,

// estimate principal directions on modified feature space
drwnPCA pca;
pca.setProperty("outputDim", 5);
// transform data via pca on modified feature space

Here the feature vectors are automatically transformed via the drwnSquareFeatureMap before learning or applying PCA.

See Also
drwnBase, drwnIO, drwnML

Classification and Regression

Classification and regression are the two most fundamental tasks in supervised machine learning. Both take as input a feature vector and produce as output a prediction. In the case of classification the prediction is a label from a small discrete set, whereas in the case of regression the prediction is a real-valued number.

Linear Regression

The following example shows how to use the drwnTLinearRegressor class for extrapolating a straight line through some data points.

double x[] = {1.0, 2.0, 3.0};
double y[] = {0.5, 1.7, 2.3};

// construct dataset (declaration omitted in the original snippet;
// the drwnRegressionDataset name is assumed from drwnML)
drwnRegressionDataset dataset;
for (int i = 0; i < 3; i++) {
    dataset.append(vector<double>(1, x[i]), y[i]);
}

// learn linear regression model over 1-dimensional features
// (the drwnBiasFeatureMap template argument is assumed)
drwnTLinearRegressor<drwnBiasFeatureMap> model(1);
model.train(dataset);

// show points from 0 to 5
for (double i = 0.0; i <= 5.0; i += 1.0) {
    DRWN_LOG_MESSAGE("value at x=" << i << " is "
        << model.getRegression(vector<double>(1, i)));
}

Classification

The following code shows how to use the drwnClassifier class.

// parameters
const double SIGNAL2NOISE = 1.5;
const int NUMPOSSAMPLES = 10;
const int NUMNEGSAMPLES = 20;

// construct dataset
// (the dataset declaration was omitted in the original snippet)
drwnClassifierDataset dataset;
for (int i = 0; i < NUMPOSSAMPLES; i++) {
    dataset.append(vector<double>(1, SIGNAL2NOISE * 2.0 * (drand48() - 1.0)), 1);
}
for (int i = 0; i < NUMNEGSAMPLES; i++) {
    dataset.append(vector<double>(1, SIGNAL2NOISE * 2.0 * (drand48() - 1.0) + 1.0), 0);
}

// learn and evaluate different classifiers
drwnClassifier *model = NULL;
for (int i = 0; i < 3; i++) {
    // instantiate the classifier
    if (i == 0) {
        DRWN_LOG_MESSAGE("logistic classifier");
        // (this instantiation was missing from the original snippet; the
        // drwnTMultiClassLogistic template and drwnBiasJointFeatureMap are assumed)
        model = new drwnTMultiClassLogistic<drwnBiasJointFeatureMap>(1, 2);
    } else if (i == 1) {
        DRWN_LOG_MESSAGE("boosted classifier");
        model = new drwnBoostedClassifier(1, 2);
    } else {
        DRWN_LOG_MESSAGE("decision-tree classifier");
        model = new drwnDecisionTree(1, 2);
    }

    // learn the parameters
    model->train(dataset);

    // show confusion matrix (on the training set)
    drwnConfusionMatrix confusion(2, 2);
    confusion.accumulate(dataset, model);
    confusion.printCounts();

    delete model;
}

The feature mapping used by the multi-class logistic classifier can be easily changed by substituting a different joint feature map as the template argument, as sketched below.
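For example, to augment each feature with its square you might write something like the following (a sketch; the drwnSquareJointFeatureMap name is assumed from the drwnML feature map classes):

model = new drwnTMultiClassLogistic<drwnSquareJointFeatureMap>(1, 2);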

The code snippet above analyzes the results on the training set by printing a confusion matrix. In practice, analysis should be done on a separate hold-out set. In addition to the confusion matrix you may be interested in plotting a precision-recall curve (see drwnClassificationResults).

See Also
drwnBase, drwnIO, drwnML

Probabilistic Graphical Models Library (drwnPGM)

The drwnPGM library provides infrastructure for representing and manipulating graphical models such as Bayesian networks and Markov random fields. This section gives you an overview of some of the functionality of the library. We begin by demonstrating construction of a factor over a set of discrete variables, then move on to a simple energy minimization example followed by a Viterbi decoding example. The tutorial ends with some advanced topics, specifically creating factor operations and sharing factor storage.

Factor Construction

A factor is a table containing a value for each assignment to a set of random variables. The following code shows how to construct a factor psi over two variables A and B.

// define variables
// (the universe declaration was omitted in the original snippet)
drwnVarUniversePtr universe(new drwnVarUniverse());
universe->addVariable(3, "A"); // variable A has cardinality 3
universe->addVariable(2, "B"); // variable B has cardinality 2

// construct factor (populated with random entries)
drwnTableFactor psi(universe);
psi.addVariable(0);
psi.addVariable(1);

int indx = 0;
for (int b = 0; b < universe->varCardinality(1); b++) {
    for (int a = 0; a < universe->varCardinality(0); a++) {
        psi[indx++] = drand48();
    }
}
Note
You can also set values for a factor using the drwnFactor::setValueOf function.
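For example, a sketch of setting a single entry might look like the following (the exact signature of setValueOf should be checked against the drwnFactor documentation):

// set the entry corresponding to A = 2, B = 1 (assumed signature)
drwnFullAssignment assignment(2);
assignment[0] = 2; assignment[1] = 1;
psi.setValueOf(assignment, 0.5);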
See Also
drwnBase, drwnIO, drwnPGM

Basic Factor Operations

When developing algorithms we often wish to perform a core set of operations on the factors in the graphical model. These operations are encapsulated by the drwnFactorOperation class and classes derived from it. The following code snippets show some simple operation use-cases. More complex examples are demonstrated in subsequent sections of this tutorial.

// use the example code in the previous section to generate
// a joint probability distribution P = 1/Z exp(-E)
drwnTableFactor prJoint(psi);
prJoint.scale(-1.0);
drwnFactorExpAndNormalizeOp(&prJoint).execute(); // (assumed operation; converts -E into 1/Z exp(-E))

// compute the marginal over variable "A" by marginalizing out variable "B"
drwnTableFactor prA(universe);
drwnFactorMarginalizeOp(&prA, &prJoint, universe->findVariable("B")).execute();

// compute the conditional distribution of variable "A" given that variable "B"
// is assigned the value 0
drwnTableFactor prAGivenB0(universe);
drwnFactorReduceOp(&prAGivenB0, &prJoint, universe->findVariable("B"), 0).execute();
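Since reducing the joint distribution gives an unnormalized slice P(A, B = 0), the result can be renormalized into a proper conditional distribution using drwnFactorNormalizeOp (the same operation used in the Viterbi example later in this tutorial):

drwnFactorNormalizeOp(&prAGivenB0).execute();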
See Also
drwnBase, drwnIO, drwnPGM

Energy Minimization

Consider the following energy minimization problem over discrete random variables x,

\[ \textrm{minimize} \; \sum_{c} \psi_c(\mathbf{x}_c) \]

A large number of algorithms have been proposed for (approximately or exactly) solving this problem depending on the structure and properties of the energy function. Darwin implements a number of these algorithms and in this tutorial we demonstrate using the ICM algorithm (Besag, 1975).

We first construct a factor graph representation of the energy function.

// variable universe: 5 variables each of cardinality 3
drwnVarUniversePtr universe(new drwnVarUniverse(5, 3));

// factor graph over the universe
// (the graph declaration was omitted in the original snippet)
drwnFactorGraph graph(universe);

// create two factors (populated with random entries)
drwnTableFactor *ABC = new drwnTableFactor(universe);
ABC->addVariable(0); ABC->addVariable(1); ABC->addVariable(2);
for (int xc = 0; xc < ABC->entries(); xc++) {
    (*ABC)[xc] = drand48();
}

drwnTableFactor *CDE = new drwnTableFactor(universe);
CDE->addVariable(2); CDE->addVariable(3); CDE->addVariable(4);
for (int xc = 0; xc < CDE->entries(); xc++) {
    (*CDE)[xc] = drand48();
}

// add to factor graph
// (the graph takes ownership so there is no need to free the factors)
graph.addFactor(ABC);
graph.addFactor(CDE);

Now let's try using ICM to compute the minimum-energy assignment:

drwnICMInference icm(graph);
drwnFullAssignment assignment;
icm.inference(assignment);
DRWN_LOG_MESSAGE("assignment has energy " << graph.getEnergy(assignment));
DRWN_LOG_VERBOSE("assignment is " << toString(assignment));

For low treewidth graphs we can use the junction tree algorithm, which first triangulates the graph, to compute the exact MAP assignment:

// junction tree inference object
// (the jta declaration was omitted in the original snippet)
drwnJunctionTreeInference jta(graph);

drwnFullAssignment assignment;
jta.inference(assignment);
DRWN_LOG_MESSAGE("assignment has energy " << graph.getEnergy(assignment));
DRWN_LOG_VERBOSE("map assignment is " << toString(assignment));
See Also
MAP Inference (Energy Minimization)

Example: Viterbi Decoding

We now provide an example of implementing the Viterbi algorithm using the factors and factor operations available in the library. The Viterbi algorithm is a method for finding the most likely sequence of states in a Markov model. For this tutorial we will assume that our model is a chain defined over T time steps by

\[ P(\mathbf{x}) \propto \prod_{t = 1}^{T - 1} \Psi_{t}(x_t, x_{t+1}) \]

The following code implements the Viterbi algorithm on a sequence of factors. Note that this implementation is far from optimal and is intended for illustrative purposes only. Practical implementations of the algorithm would store the argmax during the forward pass to avoid the reduction computation in the backward pass. The normalization of the messages is required to avoid overflow. Some Viterbi implementations avoid this by performing the computation in log-space. Try this as an exercise.

drwnFullAssignment viterbi(vector<drwnTableFactor>& factors)
{
    const int T = (int)factors.size() + 1;
    drwnVarUniversePtr universe(factors[0].getUniverse());

    // check input assumptions
    for (int t = 0; t < T - 1; t++) {
        DRWN_ASSERT(factors[t].size() == 2);        // factor is pairwise
        DRWN_ASSERT(factors[t].hasVariable(t));     // factor has variable x_t
        DRWN_ASSERT(factors[t].hasVariable(t + 1)); // factor has variable x_{t+1}
    }

    // forward pass (modifies factors)
    vector<drwnTableFactor> messages(T - 1, drwnTableFactor(universe));
    drwnFactorMaximizeOp(&messages[0], &factors[0], 0).execute();
    drwnFactorNormalizeOp(&messages[0]).execute();
    for (int t = 1; t < T - 1; t++) {
        drwnFactorTimesEqualsOp(&factors[t], &messages[t - 1]).execute();
        drwnFactorMaximizeOp(&messages[t], &factors[t], t).execute();
        drwnFactorNormalizeOp(&messages[t]).execute();
    }

    // backward pass
    drwnFullAssignment mapAssignment(T);
    mapAssignment[T - 1] = messages[T - 2].indexOfMax();
    for (int t = T - 2; t >= 0; t--) {
        drwnTableFactor mu(universe);
        drwnFactorReduceOp(&mu, &factors[t], t + 1, mapAssignment[t + 1]).execute();
        mapAssignment[t] = mu.indexOfMax();
    }

    return mapAssignment;
}

The following main function provides an example of setting up a Markov chain and calling the viterbi function defined above.

int main()
{
    // define universe
    const int T = 5; // number of time steps
    const int N = 3; // number of states per variable
    drwnVarUniversePtr universe(new drwnVarUniverse(T, N));

    // create markov chain
    vector<drwnTableFactor> factors(T - 1, drwnTableFactor(universe));
    for (int t = 0; t < T - 1; t++) {
        factors[t].addVariable(t);
        factors[t].addVariable(t + 1);
        for (int i = 0; i < N * N; i++) {
            factors[t][i] = drand48();
        }
        factors[t].normalize();
        factors[t].dump();
    }

    // run viterbi
    drwnFullAssignment assignment = viterbi(factors);
    DRWN_LOG_MESSAGE("x^\\star = " << toString(assignment));

    return 0;
}
See Also
drwnBase, drwnIO, drwnPGM

Writing New Factor Operations

In this tutorial we demonstrate how to write a new factor operation that computes the dot-product between two factors, i.e., given $\psi_A(\mathbf{v}_A)$ and $\psi_B(\mathbf{v}_B)$ with $\mathbf{v}_A \in {\cal V}_A$ and $\mathbf{v}_B \in {\cal V}_B$ will compute

\[ \sum_{\mathbf{v} \in {\cal V}_A \cup {\cal V}_{B}} \psi_A([\mathbf{v}]_A) \psi_B([\mathbf{v}]_B) \]

We first define the operation class. Most factor operations produce another factor and so should be derived from the drwnTableFactorOp base class. However, our operation computes a real number, which we return through a result pointer. The class is defined below. It contains a pointer to each of the input factors and the mappings between corresponding entries in those factors.

class DotProductOp {
protected:
    double *_result;
    const drwnTableFactor * const _A;
    const drwnTableFactor * const _B;
    // mappings from the joint variable set to entries of each factor
    // (these member declarations were missing from the original listing)
    drwnTableFactorMapping _mappingA;
    drwnTableFactorMapping _mappingB;

public:
    DotProductOp(double *result, const drwnTableFactor *A, const drwnTableFactor *B);
    ~DotProductOp() { /* do nothing */ };

    void execute();
};

The constructor computes the union of the variables in each factor and the mapping between each factor and this variable superset. Since the mappings are initialized on construction, repeated executions of the operation (say, in an iterative algorithm where the contents of the factors are changing) are relatively cheap.

DotProductOp::DotProductOp(double *result, const drwnTableFactor *A, const drwnTableFactor *B) :
    _result(result), _A(A), _B(B)
{
    // compute the union of variables from both factors
    set<int> c;
    c.insert(_A->getOrderedVars().begin(), _A->getOrderedVars().end());
    c.insert(_B->getOrderedVars().begin(), _B->getOrderedVars().end());
    vector<int> vars(c.begin(), c.end());

    _mappingA = drwnTableFactorMapping(vars, _A->getOrderedVars(), _A->getUniverse());
    _mappingB = drwnTableFactorMapping(vars, _B->getOrderedVars(), _B->getUniverse());
}

The execute method performs the dot-product operation. Conceptually, it virtually expands each factor over the full label space of the union of variables by replicating entries, and then sums the products of corresponding entries in the two virtual factors.

void DotProductOp::execute()
{
    drwnTableFactorMapping::iterator ia(_mappingA.begin());
    drwnTableFactorMapping::iterator ib(_mappingB.begin());

    *_result = 0.0;
    while (ia != _mappingA.end()) {
        *_result += (*_A)[*ia++] * (*_B)[*ib++];
    }
}

The following main() function shows example invocations of the DotProductOp class. The tutorial project contains a more elaborate example.

int main()
{
    // define variables
    // (the universe declaration was omitted in the original listing)
    drwnVarUniversePtr universe(new drwnVarUniverse());
    universe->addVariable(3, "a");
    universe->addVariable(3, "b");

    // dot-product between unary and pairwise factors
    drwnTableFactor psiA(universe);
    psiA.addVariable("a");
    psiA[0] = -1.0; psiA[1] = 0.0; psiA[2] = 1.0;

    drwnTableFactor psiB(universe);
    psiB.addVariable("b");
    psiB[0] = 1.0; psiB[1] = 0.0; psiB[2] = 0.0;

    drwnTableFactor psiAB(universe);
    psiAB.addVariables("b", "a", NULL);
    psiAB[0] = 1.0; psiAB[1] = 2.0; psiAB[2] = 3.0;
    psiAB[3] = 4.0; psiAB[4] = 5.0; psiAB[5] = 6.0;
    psiAB[6] = 7.0; psiAB[7] = 8.0; psiAB[8] = 9.0;

    double result;
    DotProductOp(&result, &psiAB, &psiA).execute();
    DRWN_LOG_MESSAGE("dot-product between A,B:[1 2 3; 4 5 6; 7 8 9] and A:[-1 0 1] is " << result);

    DotProductOp(&result, &psiAB, &psiB).execute();
    DRWN_LOG_MESSAGE("dot-product between A,B:[1 2 3; 4 5 6; 7 8 9] and B:[1 0 0] is " << result);

    return 0;
}
See Also
drwnBase, drwnIO, drwnPGM

Shared Factor Storage

Sometimes the same values appear in many different factors (i.e., over different variables), or factors are used to store temporary results at different times during the execution of an algorithm. In these situations the drwnTableFactorStorage class can be used to reduce the memory footprint and limit repeated memory allocations and deallocations. The following code snippet provides a simple example of sharing values across many factors.

const int N = 10;
drwnVarUniversePtr universe(new drwnVarUniverse(N, 2));

// shared storage and factor graph
// (these declarations were omitted in the original snippet)
drwnTableFactorStorage storage;
drwnFactorGraph graph(universe);

// add factors with shared storage
for (int i = 0; i < N - 1; i++) {
    drwnTableFactor *psi = new drwnTableFactor(universe, &storage);
    psi->addVariable(i);
    psi->addVariable(i + 1);
    graph.addFactor(psi);
}

// populate shared storage
for (int i = 0; i < storage.capacity(); i++) {
    storage[i] = drand48();
}

// show factor graph
graph.dump();
Warning
A factor that is produced as the output of an operation should not use the same shared storage as any of the input factors as this may result in memory corruption.
See Also
drwnBase, drwnIO, drwnPGM

Where next?

You are now ready to start your own project using Darwin or explore some of the existing applications and projects. If you are interested in contributing to the library email stephen.gould@anu.edu.au.

See Also
Coding Guidelines
Applications and Project Descriptions