Message Queue

The boost interprocess message queue lets different threads, even belonging to different processes, put messages on and get messages from a shared queue.

Each message has a priority, a length, and its associated data.

It is possible to put and get messages on the queue in three different ways: blocking, try, and timed.

This message queue works with raw bytes, so it cannot directly manage instances of non-trivial classes: it could be used to exchange int items, but not std::string objects. To overcome this limitation we could use the boost serialization library.

On message queue construction we must specify how the object should be generated (create and/or open), its name, and the maximum number and size of its messages. When we are done with it, we should explicitly remove the message queue by calling message_queue::remove().

In the simple example below a producer process creates a message queue of C strings (in the sense of arrays of chars) and sends a few messages. The consumer process reads the messages and outputs them to the console.

In the main we select whether to execute the producer or the consumer according to the number of arguments passed to the executable:
if(argc == 2)
  ip11a(argv[1]);
else
  ip11b();
If we try to run the consumer before the producer we get an exception.

Here is the code:
#include <iostream>
#include <cstdio>
#include <cstring>
#include <vector>
#include <string>
#include <algorithm>
#include <iterator>

#include <boost/interprocess/ipc/message_queue.hpp>
#include <boost/scoped_ptr.hpp>

using namespace boost::interprocess;

namespace
{
  const char* MQ_NAME = "MessageQueue";
  const int MQ_MSG_NR = 10;
  const int MSG_NR = MQ_MSG_NR * 2; // 1

  class QueueManager
  {
  private:
    bool drop_; // 2
    boost::scoped_ptr<message_queue> mq_; // 3

    void remove() { message_queue::remove(MQ_NAME); }
  public:
    enum { MSG_SIZE = 80 };

    // ctor for producer
    QueueManager(int maxNr) : drop_(false)
    {
      remove();
      mq_.reset(new message_queue(create_only, MQ_NAME, maxNr, MSG_SIZE));
    }

    // ctor for consumer
    QueueManager() : drop_(true), mq_(new message_queue(open_only, MQ_NAME)) {}

    ~QueueManager() { if(drop_) remove(); }

    void send(const char* id, int i)
    {
      char buffer[MSG_SIZE];
      std::sprintf(buffer, "%s_%d", id, i);

      mq_->send(buffer, MSG_SIZE, 0);
    }

    std::string receive()
    {
      char buffer[MSG_SIZE];

      unsigned int priority;
      std::size_t recvd_size;
      mq_->receive(&buffer, MSG_SIZE, recvd_size, priority);

      return std::string(buffer);
    }

    static bool checkIdLen(const char* id)
    {
      if(strlen(id) > QueueManager::MSG_SIZE - 5)
      {
        std::cout << "The specified id [" << id << "] is too long" << std::endl;
        return false;
      }
      return true;
    }
  };
}

void ip11a(const char* id)
{
  std::cout << "Starting producer ..." << std::endl;
  if(QueueManager::checkIdLen(id) == false)
    return;

  try
  {
    QueueManager qm(MQ_MSG_NR);

    std::cout << "Sending messages: ";
    for(int i = 0; i < MSG_NR; ++i)
    {
      qm.send(id, i);
      std::cout << i << ' ';
    }
    std::cout << std::endl;
  }
  catch(interprocess_exception &ex)
  {
    std::cout << ex.what() << std::endl;
    return;
  }

  std::cout << "done" << std::endl;
}

void ip11b()
{
  std::cout << "Starting consumer ..." << std::endl;

  std::vector<std::string> vec;
  vec.reserve(MSG_NR);

  try
  {
    QueueManager qm;

    for(int i = 0; i < MSG_NR; ++i)
    {
      vec.push_back(qm.receive());
      std::cout << '.';
    }
    std::cout << std::endl;
  }
  catch(interprocess_exception &ex)
  {
    std::cout << ex.what() << std::endl;
    return;
  }

  std::copy(vec.begin(), vec.end(), std::ostream_iterator<std::string>(std::cout, " "));
  std::cout << std::endl;
}
  1. Just to have a thrill, we'll send to the queue 2x its maximum capacity of messages.
  2. The QueueManager member drop_ is set to true when we want to remove the message queue in the destructor.
  3. In the constructor for the producer we can initialize the message queue object only after removing any previous message queue with the same name that may still be pending. But a message_queue can't be created empty and then filled in afterwards, so we use a pointer - or better, a smart pointer, to stay on the safe side.
The code is based on an example provided by the Boost Library Documentation.


Anonymous semaphore

Instead of using a condition, we can synchronize processes using the semaphore concept. A semaphore is initialized with an integer value; calling wait() on it checks whether the value is currently greater than zero and decreases it, otherwise we are kept waiting for our turn. To signal to the other users that a resource has been made available, we call post() on the semaphore, which increases the internal counter and notifies one other process.

Initializing a semaphore to one we get a so-called binary semaphore, practically equivalent to a mutex, where wait() is like locking it and post() is like unlocking it.

The special feature of the semaphore is that post() and wait() can be called from different threads or processes.

In the example we see a buffer containing an array of integers that is accessed by two processes: a producer and a consumer. There are three semaphores on it: one used as a mutex, the other two to avoid reading from or writing to the array when that would cause loss of data.

In the main of the application this code
if(argc == 2)
  ip10a(argv[1]);
else
  ip10b();

is used to determine whether the current process is a producer or a consumer. Accordingly, we should first call the application with an argument, to create the producer. If not, we'll get an exception from the consumer, since it expects the shared memory to be already created.

Here is the code:
#include <iostream>

#include "boost/interprocess/sync/interprocess_semaphore.hpp"
#include "boost/interprocess/shared_memory_object.hpp"
#include "boost/interprocess/mapped_region.hpp"
#include "boost/thread/thread.hpp"

using namespace boost::interprocess;

namespace
{
  const char* MY_SHARED = "MySharedMemory";

  class SharedBuffer
  {
  private:
    enum { NumItems = 10 };

    interprocess_semaphore sMutex_; // for exclusive access
    interprocess_semaphore sFull_;  // to avoid overflow
    interprocess_semaphore sEmpty_; // to avoid underflow

    int items[NumItems];
  public:
    SharedBuffer() : sMutex_(1), sFull_(NumItems), sEmpty_(0) {} // 1.

    void put(int i) // 2.
    {
      sFull_.wait();  // wait if the buffer is full
      sMutex_.wait(); // wait for exclusive access
      items[i % SharedBuffer::NumItems] = i;
      sMutex_.post(); // done with exclusive access
      sEmpty_.post(); // notify one new item available
    }

    int get(int i) // 3.
    {
      int result;

      sEmpty_.wait(); // wait if the buffer is empty
      sMutex_.wait(); // wait for exclusive access
      result = items[i % SharedBuffer::NumItems];
      sMutex_.post(); // done with exclusive access
      sFull_.post();  // notify one item consumed

      return result;
    }
  };

  class SMManager
  {
  private:
    std::string name_;
    shared_memory_object shm_;
    mapped_region region_;
    SharedBuffer* sb_;

    void remove() { shared_memory_object::remove(name_.c_str()); }
  public:
    SMManager(const char* name, bool create) : name_(name)
    {
      if(create)
      {
        remove();

        shared_memory_object shm(create_only, name, read_write);
        shm.truncate(sizeof(SharedBuffer));
        shm_.swap(shm);
      }
      else
      {
        shared_memory_object shm(open_only, name, read_write);
        shm_.swap(shm);
      }

      mapped_region region(shm_, read_write);
      region_.swap(region);

      void* addr = region_.get_address();
      sb_ = create ? new (addr) SharedBuffer : static_cast<SharedBuffer*>(addr);
    }

    ~SMManager() { remove(); }

    SharedBuffer* getSharedBuffer() { return sb_; }
  };
}

void ip10a(const char* id)
{
  std::cout << "Producer " << id << " started" << std::endl;

  try
  {
    SMManager smm(MY_SHARED, true);

    SharedBuffer* data = smm.getSharedBuffer();
    const int NumMsg = 50;

    for(int i = 0; i < NumMsg; ++i)
    {
      data->put(i);

      std::cout << i << ' ';
      boost::this_thread::sleep(boost::posix_time::milliseconds(250));
    }
  }
  catch(interprocess_exception &ex)
  {
    std::cout << ex.what() << std::endl;
    return;
  }

  std::cout << std::endl << "Done!" << std::endl;
}

void ip10b()
{
  std::cout << "Consumer started" << std::endl;

  try
  {
    SMManager smm(MY_SHARED, false);

    SharedBuffer* data = smm.getSharedBuffer();

    const int NumMsg = 50;

    int extractedData;

    for(int i = 0; i < NumMsg; ++i)
    {
      extractedData = data->get(i);

      std::cout << extractedData << ' ';
      boost::this_thread::sleep(boost::posix_time::milliseconds(100));
    }
  }
  catch(interprocess_exception &ex)
  {
    std::cout << ex.what() << std::endl;
    return;
  }

  std::cout << std::endl << "Done!" << std::endl;
}

1. the constructor for SharedBuffer initializes its three semaphores. The first one, sMutex_, is used as a mutex - and that's why it gets its name; then we have sFull_, initialized to the array size, used to avoid writing elements to the array when this would lead to a data loss; and finally we have sEmpty_, used to avoid reading from the array when there is no item available.
2. SharedBuffer::put() is used by the producer to put a new item in the array. Before actually writing the value we ensure that the buffer is not full and that we have exclusive access to the resource. After writing we release the mutex and notify that a new element has been inserted.
3. SharedBuffer::get() is used by the consumer and is symmetrical to the put() method.

To make the example a bit more interesting, a couple of sleep() calls are put in the code of the producer and the consumer.

The code is based on an example provided by the Boost Library Documentation.


Anonymous condition

Let's write an example to show how to use anonymous conditions in a multiprocess application using the boost interprocess (IPC) library. We write a producer and a consumer that manage the exchange of messages through a buffer built in shared memory.

Two conditions are used to let the two processes communicate the changes of the buffer state to each other.

The main of the application performs the selection between the two functions that specify whether the process is actually the producer or the consumer. The producer, identified by an argument, should be launched first:
if(argc == 2)
  ip09a(argv[1]);
else
  ip09b();

And here is the actual code:

#include <iostream>
#include <cstdio>
#include <cstring>
#include <string>

#include "boost/interprocess/sync/interprocess_mutex.hpp"
#include "boost/interprocess/sync/interprocess_condition.hpp"
#include "boost/interprocess/sync/scoped_lock.hpp"
#include "boost/interprocess/shared_memory_object.hpp"
#include "boost/interprocess/mapped_region.hpp"

using namespace boost::interprocess;

namespace
{
  const char* MY_SHARED = "MySharedMemory";

  class SharedMessage // 1.
  {
  private:
    enum { SIZE = 100 };
    char message_[SIZE];

    interprocess_mutex mutex_;
    bool pending_;
    bool terminated_;

    interprocess_condition cEmpty_;
    interprocess_condition cFull_;

  public:
    SharedMessage() : pending_(false), terminated_(false) {}

    void sendMessage(const char* id, int i) // 2.
    {
      scoped_lock<interprocess_mutex> lock(mutex_);
      while(pending_) // a loop, to guard against spurious wakeups
        cFull_.wait(lock);

      std::sprintf(message_, "%s_%d", id, i);
      pending_ = true;
      cEmpty_.notify_one();
    }

    void terminate() // 3.
    {
      scoped_lock<interprocess_mutex> lock(mutex_);
      while(pending_)
        cFull_.wait(lock);

      terminated_ = true;
      pending_ = true;

      cEmpty_.notify_one();
    }

    bool readMessage(std::string& buffer) // 4.
    {
      scoped_lock<interprocess_mutex> lock(mutex_);
      while(!pending_)
        cEmpty_.wait(lock);

      if(terminated_)
        return false; // no message read

      buffer = message_;
      pending_ = false;
      cFull_.notify_one();

      return true;
    }
  };

  class SMManager // 5.
  {
  private:
    std::string name_;
    shared_memory_object shm_;
    mapped_region region_;
    SharedMessage* sm_;

    void remove() { shared_memory_object::remove(name_.c_str()); }
  public:
    SMManager(const char* name, bool create) : name_(name)
    {
      if(create)
      {
        remove();

        shared_memory_object shm(create_only, name, read_write);
        shm.truncate(sizeof(SharedMessage));
        shm_.swap(shm);
      }
      else
      {
        shared_memory_object shm(open_only, name, read_write);
        shm_.swap(shm);
      }

      mapped_region region(shm_, read_write);
      region_.swap(region);

      void* addr = region_.get_address();
      sm_ = create ? new (addr) SharedMessage : static_cast<SharedMessage*>(addr);
    }

    ~SMManager() { remove(); }

    SharedMessage* getSharedMessage() { return sm_; }
  };
}

void ip09a(const char* id) // 6.
{
  std::cout << "Starting producer process" << std::endl;

  try
  {
    SMManager smm(MY_SHARED, true);
    SharedMessage* pSM = smm.getSharedMessage();

    for(int i = 0; i < 7; ++i)
      pSM->sendMessage(id, i);
    pSM->terminate();
  }
  catch(interprocess_exception &ex)
  {
    std::cout << ex.what() << std::endl;
    return;
  }

  std::cout << "Execution completed" << std::endl;
}

void ip09b() // 7.
{
  std::cout << "Starting consumer " << std::endl;

  try
  {
    SMManager smm(MY_SHARED, false);
    SharedMessage* pSM = smm.getSharedMessage();

    std::string message;
    while(pSM->readMessage(message))
      std::cout << "Message received: " << message << std::endl;
  }
  catch(interprocess_exception &ex)
  {
    std::cout << ex.what() << std::endl;
    return;
  }

  std::cout << "Execution completed" << std::endl;
}

1. the SharedMessage class manages the buffer in shared memory. Basically we have a mutex to rule the access to the shared resources and a couple of conditions to let the processes communicate the change of status to each other. A boolean, pending_, is used to keep track of the current status of the buffer, and another one, terminated_, to signal to the consumer when the producer has completed its operations.
2. SharedMessage::sendMessage() is used by the producer to put a message in the shared memory for the consumer. A scoped lock is created, then we check the status variable: if there is already a message in the buffer, the process waits on the lock through the condition variable cFull_. After acquiring the rightful access, we put the message in the shared memory, set the status boolean, and then notify on the condition cEmpty_ that, well, the message buffer is not empty anymore.
3. SharedMessage::terminate() puts a different message in the shared memory, the one saying that the producer is done producing messages. Instead of putting a string in the buffer we just set the boolean terminated_ to true.
4. SharedMessage::readMessage() is used by the consumer to read the message stored by the producer in the shared memory; but before reading the buffer we check whether the producer has actually ended its message production, looking at the terminated_ flag. Notice that the condition variable usage mirrors the one in the sendMessage() function.
5. the SMManager class takes care of the shared memory management. Its constructor has a parameter, create, that lets us use it both for the producer, which actually allocates the shared memory, and for the consumer, which just accesses it. The shared memory is associated with a SharedMessage object: the producer uses placement new to call the SharedMessage constructor without allocating memory - we just use the shared memory that we already have available - while the consumer just casts the memory to a pointer to SharedMessage.
6. This is the function used to implement the producer functionality. It creates a SMManager object - to allocate shared memory and create a SharedMessage object - and then sends a few messages.
7. Here we see the logic of the consumer. It creates a SMManager object - to gain access to the shared memory allocated by the producer, seeing it as a SharedMessage object - and then reads messages until it finds that the producer has completed its job.

The code is based on an example provided by the Boost Library Documentation.


Named mutex

A named mutex can be used in an IPC context to synchronize processes on a file. The code shown in the example puts on a file a string that includes the thread id of the current process and a counter. If we execute it in different processes, the strings could get mixed up, so we lock a named mutex to give each process exclusive access to the file while writing.

Here is the code:
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

#include "boost/interprocess/sync/scoped_lock.hpp"
#include "boost/interprocess/sync/named_mutex.hpp"
#include "boost/thread/thread.hpp"

using namespace boost::interprocess;

namespace
{
  class FileManager // 1.
  {
  private:
    named_mutex mutex_;
    std::ofstream file_;

  public:
    FileManager() : mutex_(open_or_create, "fstream_named_mutex"),
      file_("boost.log", std::ios_base::app) {}
    ~FileManager() { named_mutex::remove("fstream_named_mutex"); }

    void log(std::string line)
    {
      scoped_lock<named_mutex> lock(mutex_);
      file_ << line << std::endl;
    }
  };
}

void ip08()
{
  try
  {
    FileManager fm;

    for(int i = 0; i < 10; ++i)
    {
      std::cout << '.'; // a sign of life for the user ...
      std::ostringstream os;
      os << "Process id " << boost::this_thread::get_id() << " iteration # " << i;

      fm.log(os.str());
      boost::this_thread::sleep(boost::posix_time::milliseconds(1000)); // 2.
    }
    std::cout << std::endl;
  }
  catch(interprocess_exception &ex)
  {
    std::cout << ex.what() << std::endl;
    return;
  }
  return;
}

1. through the class FileManager we manage the file stream and the associated mutex: the constructor opens the file in append mode and creates (or opens) the named mutex we use to rule the access to the file; the destructor removes the mutex. The log() method uses a scoped lock on the mutex to safely perform the write to the file.
2. a sleep is used to let the user interact with the execution.

The code is based on an example provided by the Boost Library Documentation.


Anonymous mutex

The application we are going to use to show how to use an anonymous mutex is built around a simple cyclic buffer that resides in shared memory and is used by two different processes. An interprocess_mutex, defined in the Boost IPC library, is used to rule the access to the shared resources through a scoped_lock.

To create different processes we call the executable with no parameter - for the master process - and then with a parameter, the name of the secondary process.

So, the main of our application distinguishes between the two cases, calling the appropriate function, with a piece of code like this:
if(argc == 1)
  ip07a();
else
  ip07b(argv[1]);
Let's now have a look at the complete code, then we'll say something about the most interesting passages:
#include <cstdio>
#include <iostream>

#include "boost/interprocess/sync/interprocess_mutex.hpp"
#include "boost/interprocess/sync/scoped_lock.hpp"
#include "boost/interprocess/shared_memory_object.hpp"
#include "boost/interprocess/mapped_region.hpp"
#include "boost/thread/thread.hpp"

using namespace boost::interprocess;

namespace
{
const char* MY_SHARED = "MySharedMemory";

class SharedMemoryLog // 1
{
private:
  enum { NUM_ITEMS = 10, LINE_SIZE = 100 };
  boost::interprocess::interprocess_mutex mutex_;

  char items[NUM_ITEMS][LINE_SIZE];
  int curLine_;
  bool done_;
public:
  SharedMemoryLog() : curLine_(0), done_(false) {}

  void push_line(const char* id, int index)
  {
    scoped_lock<interprocess_mutex> lock(mutex_);
    std::sprintf(items[(curLine_++) % SharedMemoryLog::NUM_ITEMS], "%s_%d", id, index);
    std::cout << "Inserting item " << id << ' ' << index << std::endl;
  }

  void dump()
  {
    scoped_lock<interprocess_mutex> lock(mutex_);
    for(int i = 0; i < NUM_ITEMS; ++i)
      std::cout << items[i] << std::endl;
  }

  void done()
  {
    scoped_lock<interprocess_mutex> lock(mutex_);
    done_ = true;
  }

  bool isDone()
  {
    scoped_lock<interprocess_mutex> lock(mutex_);
    return done_;
  }
};

class ShMemManager // 2
{
private:
  std::string name_;
  bool create_;
  shared_memory_object shm_;
  mapped_region region_;
  SharedMemoryLog* sml_;

  void remove() { shared_memory_object::remove(name_.c_str()); }
public:
  ShMemManager(const char* name, bool create = true) : name_(name), create_(create)
  {
    if(create_)
    {
      remove();

      shared_memory_object shm(create_only, name_.c_str(), read_write);
      shm.truncate(sizeof(SharedMemoryLog));
      shm_.swap(shm);
    }
    else
    {
      shared_memory_object shm(open_only, name_.c_str(), read_write);
      shm_.swap(shm);
    }

    mapped_region region(shm_, read_write);
    region_.swap(region);
    void* addr = region_.get_address();

    sml_ = create_ ? new (addr) SharedMemoryLog : static_cast<SharedMemoryLog*>(addr);
  }

  ~ShMemManager() { remove(); }

  SharedMemoryLog* getMemory() { return sml_; }
};
}

void ip07a() // 4
{
  std::cout << "Starting master process ..." << std::endl;

  try
  {
    ShMemManager smm(MY_SHARED);
    SharedMemoryLog* data = smm.getMemory();

    for(int i = 0; i < 7; ++i)
    {
      data->push_line("master", i);
      boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    }

    std::cout << "Master dumps data:" << std::endl;
    data->dump();

    while(true)
    {
      if(data->isDone())
      {
        std::cout << "Master sees that the other process is done" << std::endl;
        break;
      }

      std::cout << "Master waits for the other process" << std::endl;
      boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    }

    std::cout << "Master dumps again the data:" << std::endl;
    data->dump();
  }
  catch(interprocess_exception& ex)
  {
    std::cout << ex.what() << std::endl;
    return;
  }

  std::cout << "Master execution completed" << std::endl;
}

void ip07b(const char* id) // 5
{
  std::cout << "Process " << id << " started" << std::endl;

  try
  {
    ShMemManager smm(MY_SHARED, false);
    SharedMemoryLog* data = smm.getMemory();

    for(int i = 0; i < 7; ++i)
    {
      data->push_line(id, i);
      boost::this_thread::sleep(boost::posix_time::milliseconds(1000));
    }
    data->done();

    std::cout << id << " dumps data:" << std::endl;
    data->dump();
  }
  catch(interprocess_exception& ex)
  {
    std::cout << ex.what() << std::endl;
    return;
  }

  std::cout << "Process " << id << " done" << std::endl;
}
1. the class SharedMemoryLog manages the concurrent access by the different processes, and it is meant to be placed in shared memory. Notice that every method is shielded by a scoped_lock on the mutex owned by the class.
2. the class ShMemManager is used to manage the shared memory. A parameter in the constructor lets us determine whether we want to call it to actually create the shared memory - that would be the usage for the master process - or just to read it - for the secondary process.
3. the last line of the ShMemManager constructor associates the sml_ pointer to SharedMemoryLog with the shared memory we have just created or accessed. If we are in creation mode, we should actually call the constructor for SharedMemoryLog, asking it to use the shared memory. To do that we use the so-called placement new construct, "new (addr) SharedMemoryLog", specifying the memory address it should use. Otherwise we simply perform a static cast to the required type.
4. the function called from the master process. It just puts a few lines in the log (slowing down the process with a sleep call), dumps the log, stays in a busy wait for the other process to complete, then performs another dump before returning. This busy wait is not very good programming; we should use a condition instead. We'll see how to do that in a future post.
5. the function called by the secondary process. The main differences from the master are that we call the constructor for ShMemManager specifying that we want to access shared memory already available, and that we let the master know we are done by calling the function done(), which sets an internal flag. As already said, this is not a very clean way of working; we'll see how to do better using a condition.

The code is based on an example provided by the Boost Library Documentation.


File_mapping

Using the Boost IPC library, it is possible to associate a file's content with shared memory. That gives a number of advantages, like delegating to the OS all the trouble connected with data synchronization and caching.

By the way, sometimes it could be useful to use file_mapping even if we are not interested in sharing memory, but just to simplify the data management on a file.

Here is an example:

#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <cstring>
#include <cstddef>
#include <cstdlib>
#include "boost/interprocess/file_mapping.hpp"
#include "boost/interprocess/mapped_region.hpp"

using namespace boost::interprocess;

namespace
{
  const std::size_t FILE_SIZE = 10000;
  const char* FILE_NAME = "file.bin";

  class FileManager // 1.
  {
  public:
    enum Mode { create_remove, create, remove, access };

    FileManager(const char* fname, Mode mode) : fname_(fname), mode_(mode)
    {
      if(mode == create_remove || mode == create)
      {
        std::filebuf fbuf;
        fbuf.open(fname_.c_str(), std::ios_base::in | std::ios_base::out
          | std::ios_base::trunc | std::ios_base::binary);
        // set the size
        fbuf.pubseekoff(FILE_SIZE - 1, std::ios_base::beg);
        fbuf.sputc(0);
      }
    }

    mapped_region getMappedRegion(mode_t mode)
    {
      // create a file mapping
      file_mapping m_file(fname_.c_str(), mode);

      // map the whole file with the required permissions in this process
      return mapped_region(m_file, mode);
    }

    ~FileManager()
    {
      if(mode_ == remove || mode_ == create_remove)
      {
        file_mapping::remove(fname_.c_str());
        std::cout << "File " << fname_ << " removed" << std::endl;
      }
    }
  private:
    std::string fname_;
    Mode mode_;
  };

  // ensure the executable file name is quoted in case it has internal blanks
  bool launchChildProcess(const char* progName)
  {
    bool quote = strchr(progName, ' ') == 0 ? false : true;
    std::string s((quote ? "\"" : ""));
    s += progName;
    if(quote)
      s += "\"";
    s += " child";

    if(std::system(s.c_str()) != 0)
    {
      std::cout << "error in the child process" << std::endl;
      return false;
    }
    return true;
  }

  void checkMemoryOne(void* address, std::size_t size)
  {
    const char* mem = static_cast<const char*>(address);
    for(std::size_t i = 0; i < size; ++i)
    {
      if(*mem++ != 1)
      {
        std::cout << "Memory check failed" << std::endl;
        return;
      }
    }
    std::cout << "Memory check succeeded" << std::endl;
  }

  void sharedMemoryAccess() // 2.
  {
    FileManager fm(FILE_NAME, FileManager::access);
    mapped_region region = fm.getMappedRegion(read_only);

    // get the address of the mapped region
    void* addr = region.get_address();
    std::size_t size = region.get_size();

    checkMemoryOne(addr, size);
  }

  void fileAccess() // 3.
  {
    std::filebuf fbuf;
    fbuf.open(FILE_NAME, std::ios_base::in | std::ios_base::binary);

    // read it to memory
    std::vector<char> vect(FILE_SIZE, 0);
    fbuf.sgetn(&vect[0], std::streamsize(vect.size()));

    checkMemoryOne(&vect[0], FILE_SIZE);
  }
}

// parent process
void ip06a(const char* progName)
{
  FileManager fm(FILE_NAME, FileManager::create_remove);
  mapped_region region = fm.getMappedRegion(read_write);

  // get the address of the mapped region
  void* addr = region.get_address();
  std::size_t size = region.get_size();

  // write all the memory to 1
  std::memset(addr, 1, size);

  launchChildProcess(progName);
}

// child process
void ip06b()
{
  std::cout << "Accessing the shared memory" << std::endl;
  sharedMemoryAccess();

  std::cout << "Accessing the file on disk" << std::endl;
  fileAccess();
}

1. the class FileManager is just a little wrapper for the basic file access functionality. Its constructor keeps track of the associated file name and the way we want to access it. Only if we want to actually create the file are the filebuf functionalities called; otherwise, if we just want to access the data (or remove the file), we rely on the fact that someone else should have already created the file. The getMappedRegion() method performs the mapping between shared memory and file and returns the mapped region in the mode we require. The destructor removes the file, only when required.
2. the function sharedMemoryAccess() shows how to work with the mapped_region.
3. as a comparison, the function fileAccess() performs the same action as sharedMemoryAccess(), but accessing the file directly.

The main for this application basically just calls the parent or the child function according to the passed parameter:
if(argc == 1)
  ip06a(argv[0]);
else
  ip06b();

The code is based on an example provided by the Boost Library Documentation.


Shared_memory_object

As far as I know, the new C++ standard (C++0x, still a draft as I'm writing) does not bring much to the interprocess communication (IPC) field. I guess the problem is that almost any environment provides its own native way to approach the issue, and at this point it is not easy to find a solution that would make everyone happy.

Luckily Boost provides an interprocess library that is very useful for keeping the code as portable as possible.

Here is a first example that shows how to manage shared memory among different processes.

We create two processes. A first one takes care of allocating the shared memory, then it spawns a new process that accesses the shared memory, and does something with it before terminating. Then the parent process gets back in control, removes the shared memory and terminates.

This is the main of our application:
int main(int argc, char* argv[])
{
  if(argc == 1)
    ip05a(argv[0]);
  else
    ip05b();

  system("pause");
}
If we call the executable without parameters, the only argument passed to main() is the executable name, and we use this information to consider this process as the parent one for our application. Otherwise we assume to be in the child process.

This is the implementation for the two functions:
#include <cstring>
#include <cstdlib>
#include <string>
#include <iostream>
#include "boost/interprocess/shared_memory_object.hpp"
#include "boost/interprocess/mapped_region.hpp"

using namespace boost::interprocess;

namespace
{
  const char* MY_SHARED = "MySharedMemory";
  const size_t MY_SHARED_SIZE = 1000;

  // Remove shared memory on construction and destruction
  class Remover
  {
  private:
    std::string name_;
    void remove() { shared_memory_object::remove(name_.c_str()); }
  public:
    Remover(const char* name) : name_(name) { remove(); }
    ~Remover() { remove(); }
  };

  // ensure the executable file name is quoted in case it has internal blanks
  bool launchChildProcess(const char* progName)
  {
    bool quote = strchr(progName, ' ') == 0 ? false : true;
    std::string s((quote ? "\"" : ""));
    s += progName;
    if(quote)
      s += "\"";
    s += " child";

    if(std::system(s.c_str()) != 0)
    {
      std::cout << "error in the child process" << std::endl;
      return false;
    }
    return true;
  }
}

void ip05a(const char* progName)
{
  try
  {
    Remover remover(MY_SHARED); // 1
    shared_memory_object shm(create_only, MY_SHARED, read_write); // 2
    shm.truncate(MY_SHARED_SIZE); // 3
    mapped_region region(shm, read_write); // 4
    std::memset(region.get_address(), 1, region.get_size()); // 5

    launchChildProcess(progName);
  }
  catch(interprocess_exception& ie)
  {
    std::cout << "can't create the shared memory: " << ie.what() << std::endl;
  }
}

void ip05b()
{
  try
  {
    shared_memory_object shm(open_only, MY_SHARED, read_only); // 6
    mapped_region region(shm, read_only);
    std::cout << "Working on a mapped region with size " << region.get_size() << std::endl;

    // do something
    char* mem = static_cast<char*>(region.get_address());
    for(std::size_t i = 0; i < region.get_size(); ++i)
    {
      if(*mem++ != 1)
      {
        std::cout << "unexpected value in the shared memory" << std::endl;
        return;
      }
    }

    std::cout << "shared memory read correctly" << std::endl;
  }
  catch(interprocess_exception& ie)
  {
    std::cout << "can't work on the shared memory: " << ie.what() << std::endl;
  }
}
  1. We create a Remover object to ensure that the shared memory is correctly destroyed when we leave the parent process. The object calls shared_memory_object::remove() for the specified name both on construction and on destruction. The call on construction could be seen as overkill, but it is cheap, and saves us some trouble in case, for any reason, the shared memory allocated by a previous execution is unexpectedly still there.
  2. We create an instance of shared_memory_object passing the flag create_only, so an exception is raised if an object with the passed name already exists in shared memory. Besides, we specify the read_write access mode since we actually want to modify the associated memory.
  3. The shared_memory_object::truncate() method is used to set the size of the object.
  4. A mapped_region maps a shared_memory_object in a region, making the memory available to the current process. Here too we specify that we want to access the memory in read_write mode.
  5. The mapped_region memory location is accessed through get_address() and get_size().
  6. We create an instance of shared_memory_object passing the flag open_only, so an exception is raised if no object with the passed name exists yet in shared memory. Besides, we specify the read_only access mode since we just want to read the associated memory.
The code is based on an example provided by the Boost Library Documentation.

Go to the full post

boost::lambda::placeholder and more

C++0x simplifies the usage of lambda expressions a lot. Still, if you are using a compiler that does not support it yet, it is useful to know a couple of tricks that help keep boost lambda code readable.

For instance, using constant_type<>::type we can define constants to be used in our expressions; besides, we see here how to rename the placeholders:

#include <iostream>
#include <vector>
#include <algorithm>

#include "boost/lambda/lambda.hpp"
#include "boost/lambda/bind.hpp"
#include "boost/function.hpp"

using std::cout;
using std::endl;

namespace
{
  template <typename T, typename Operation>
  void for_all(T& t, Operation op) { std::for_each(t.begin(), t.end(), op); }

  template<typename T>
  void lambdaBoost(const T& t)
  {
    using namespace boost::lambda;

    constant_type<char>::type space = constant(' ');
    boost::lambda::placeholder1_type _;

    boost::function<void(typename T::value_type)> f = cout << _ << space;
    for_all(t, f);
    cout << endl;
  }

  template<typename T>
  void lambda0x(const T& t)
  {
    auto f = [] (typename T::value_type x) { cout << x << ' '; };
    for_all(t, f);
    cout << endl;
  }
}

void lambda04()
{
  std::vector<int> v;
  for(int i = 0; i < 5; ++i)
    v.push_back(i);

  lambdaBoost(v);
  lambda0x(v);
}


For more information on boost lambda you could read "Beyond the C++ Standard Library: An Introduction to Boost", by Björn Karlsson, an Addison Wesley Professional book.

Go to the full post

The use of ret in a boost lambda expression

Another nuisance in the usage of boost lambda expressions is that the compiler can get confused while translating the expression and fail to deduce the correct return type of a token. The most annoying part is that the resulting error message is incredibly long and quite difficult to understand. At least we can find out that the problem comes from a 'sig' symbol and from deducing argument types.

The problem could look obscure, at least at first, but the solution is easy: just explicitly tell the compiler the return type it should expect, using the ret<>() function or its shortcut, bind<>().

The good news is that this problem goes away entirely when using the C++0x implementation.

Let's see an example:

#include <iostream>

#include "boost/lambda/lambda.hpp"
#include "boost/lambda/bind.hpp"

using namespace boost::lambda;

using std::cout;
using std::endl;

class Doubler
{
public:
  int operator()(int i) const { return i*2; }
};

void lambda03()
{
  Doubler d;

  // this doesn't work:
  //(cout << _1 << " * 2 = " << (bind(d, _1)) << '\n')(12);

  (cout << _1 << " * 2 = " << (ret<int>(bind(d, _1))) << '\n')(12);
  (cout << _1 << " * 2 = " << (bind<int>(d, _1)) << '\n')(12);

  [d] (int j) { cout << j << " * 2 = " << d(j) << endl; } (12);
}

The commented boost lambda expression simply does not compile. We have to help the compiler figure out the type returned by the token involving the binding to the Doubler functor, as shown in the next line, using the ret<>() function; or, more concisely, we can use the bind<>() function, which combines the binding with the return type specification.

And, when we can, using the C++0x implementation could save us a lot of head scratching.

More information on boost lambda and ret<> in "Beyond the C++ Standard Library: An Introduction to Boost", by Björn Karlsson, an Addison Wesley Professional book.

Go to the full post

Constants inside a lambda expression

Using constants inside a boost lambda expression is a bit of a nuisance. Also, calling a member on a variable passed to the lambda expression is not as smooth as it could be.

This is not an issue anymore if you are lucky enough to use a compiler that supports C++0x lambdas. But if this is not the case, well, just be patient.

Here is an example that shows the difference between the two implementations:

#include <iostream>
#include <string>
#include <map>
#include <algorithm>

#include "boost/lambda/lambda.hpp"
#include "boost/lambda/bind.hpp"

using std::cout;
using std::endl;

namespace
{
  typedef std::map<int, std::string> AMap;

  void lambdaBoost(const AMap& aMap)
  {
    using namespace boost::lambda;

    cout << "boost lambda nullary functor: constant()\n";
    for_each(aMap.begin(), aMap.end(),
      cout << constant("key = ") << bind(&AMap::value_type::first, _1)
        << ", value = " << bind(&AMap::value_type::second, _1) << '\n');

    // Print the size and max_size of the container
    (cout << "size is = " << bind(&AMap::size, _1)
      << "\nmax_size is = " << bind(&AMap::max_size, _1) << '\n')(aMap);
  }

  void lambdaOx(const AMap& aMap)
  {
    cout << endl << "C++0x lambda makes it easier" << endl;
    typedef AMap::value_type AMType;
    auto f1 = [] (AMType at) { cout << "key = " << at.first << ", value = " << at.second << endl; };
    for_each(aMap.begin(), aMap.end(), f1);

    [aMap] { cout << "size is = " << aMap.size() << endl << "max_size is = " << aMap.max_size() << endl; } ();
  }
}

void lambda02()
{
  AMap aMap;
  aMap[3] = "Less than pi";
  aMap[42] = "You tell me";
  aMap[0] = "Nothing, if you ask me";

  lambdaBoost(aMap);
  lambdaOx(aMap);
}

As we see, the boost implementation of lambda requires our collaboration: we have to use the constant() function to mark at least the first string used in the expression as a constant, and we have to explicitly bind() the variable to access one of its members.

The code is based on an example provided by "Beyond the C++ Standard Library: An Introduction to Boost", by Björn Karlsson, an Addison Wesley Professional book. An interesting reading indeed.

Go to the full post

The lambda expression

A lambda expression is a sort of unnamed function declared in the body of another function; it can also be stored in a boost::function (or even a std::function, if your compiler already supports it).

Here is a simple example that shows how to use a lambda expression, alone and in combination with function, using the boost libraries:

#include <iostream>

#include "boost/lambda/lambda.hpp"
#include "boost/function.hpp"

using namespace boost::lambda;
using std::cout;
using std::endl;

void boostLambda()
{
  cout << "boost::lambda says hello ..." << endl;

  (cout << _1 << " " << _3 << " " << _2 << "!\n") ("Hello", "friend", "my");

  boost::function<void(int,int,int)> f =
    cout << _1 << "*" << _2 << "+" << _3 << " = " << _1*_2+_3 << "\n";

  f(1, 2, 3);
  f(3, 2, 1);
}

The boost implementation is a bit different from the standard one, but it works just in the same way:

#include <iostream>
#include <functional>

using std::cout;
using std::endl;

void stdLambda()
{
  cout << endl << "C++0x lambda says hello ..." << endl;

  [] (const char* a, const char* b, const char* c) { cout << a << ' ' << c << ' ' << b << '!' << endl; }("Hello", "friend", "my");

  std::function<void(int,int,int)> f =
    [] (int a, int b, int c) { cout << a << "*" << b << "+" << c << " = " << a*b+c << endl; };

  f(1, 2, 3);
  f(3, 2, 1);
}

The standard implementation is a bit more verbose than the boost one; or, we could say, a bit more explicit.

In my opinion the standard version is more readable, but this is just a matter of taste.

The code is based on an example provided by "Beyond the C++ Standard Library: An Introduction to Boost", by Björn Karlsson, an Addison Wesley Professional book. An interesting reading indeed.

Go to the full post

boost::function and the command pattern

You know the command pattern, a very useful abstraction indeed. For instance, we could use it to implement a fake tape recorder according to this design:

We have a class, TapeRecorder, that implements the recorder itself, giving us access to its functionality.

Then we have an abstract base class, CommandBase, which acts as the base for each concrete class implementing a recorder command, like PlayCommand, StopCommand, etc.

Given that, our GUI, or whatever interface we provide to the user, can access the recorder commands through the Command classes.

Here is an implementation of this schema:
#include <iostream>
#include <string>

using std::cout;
using std::endl;
using std::string;

class TapeRecorder
{
public:
  void play() { cout << "Since my baby left me..." << endl; }
  void stop() { cout << "OK, taking a break" << endl; }
  void forward() { cout << "whizzz" << endl; }
  void rewind() { cout << "zzzihw" << endl; }
  void record(const string& sound) { cout << "Recorded: " << sound << endl; }
};

class CommandBase
{
public:
  virtual bool enabled() const =0;
  virtual void execute() =0;

  virtual ~CommandBase() {}
};

class PlayCommand : public CommandBase
{
  TapeRecorder* p_;
public:
  PlayCommand(TapeRecorder* p) : p_(p) {}

  bool enabled() const { return true; }
  void execute() { p_->play(); }
};

class StopCommand : public CommandBase
{
  TapeRecorder* p_;
public:
  StopCommand(TapeRecorder* p) : p_(p) {}
  bool enabled() const { return true; }
  void execute() { p_->stop(); }
};

void commandPattern()
{
  cout << "Using the command pattern" << endl;

  TapeRecorder tr;

  CommandBase* pPlay = new PlayCommand(&tr);
  CommandBase* pStop = new StopCommand(&tr);

  cout << "Pressing button play" << endl;
  pPlay->execute();
  cout << "Pressing button stop" << endl;
  pStop->execute();

  delete pPlay;
  delete pStop;
}
Sometimes we have no alternative to this, especially if each command needs to be managed in its own peculiar way; but in a case like this, having a specific class for each actual command is just overkill.

We can think of a generic Command class that would be initialized with the correct command, in this way:
class TapeRecorderCommand : public CommandBase
{
  void (TapeRecorder::*func_)(); 
  TapeRecorder* p_;
public:
  TapeRecorderCommand(TapeRecorder* p, void (TapeRecorder::*func)()) : func_(func), p_(p) {}

  bool enabled() const { return true; }
  void execute() { (p_->*func_)(); }
};
Now our user code would look in this way:
void commandPatternImproved()
{
  cout << endl << "Using the improved command" << endl;

  TapeRecorder tr;

  CommandBase* pPlay = new TapeRecorderCommand(&tr, &TapeRecorder::play);
  CommandBase* pStop = new TapeRecorderCommand(&tr, &TapeRecorder::stop);

  cout << "Pressing button play" << endl;
  pPlay->execute();
  cout << "Pressing button stop" << endl;
  pStop->execute();

  delete pPlay;
  delete pStop;
}
So, we just use the TapeRecorderCommand for all the commands, simplifying the hierarchy.

Well, at this point we could even get rid of all the hierarchy and just use a class that would store a boost::function, in this way:
class Command2
{
  boost::function<void()> f_;
public:
  Command2() {}
  Command2(boost::function<void()> f) : f_(f) {}

  void execute() { if(f_) f_(); }
  template <typename Func> void setFunction(Func f) { f_ = f; }
  bool enabled() const { return f_; }
};
Class that would be used in this way:
void commandPatternBoost()
{
  cout << endl << "Using boost::function and boost::bind" << endl;

  TapeRecorder tr;

  Command2 play(boost::bind(&TapeRecorder::play,&tr));
  Command2 stop(boost::bind(&TapeRecorder::stop,&tr));
  Command2 forward(boost::bind(&TapeRecorder::forward,&tr));
  Command2 rewind(boost::bind(&TapeRecorder::rewind,&tr));
  Command2 record;

  cout << "Pressing button play" << endl;
  if (play.enabled())
    play.execute();

  cout << "Pressing button stop" << endl;
  stop.execute();

  cout << "Pressing button record" << endl;
  string s = "What a beautiful morning...";
  record.setFunction(boost::bind(&TapeRecorder::record, &tr, s));
  record.execute();
}
Notice that we combine boost::function with boost::bind, since we want to use, as a function, a member function called on a specific object. It is a pleasure to see how smoothly they work together.

But do we really need the Command class to wrap the boost::function object? Actually no, we don't. All the functionality we require from that class is already available in the boost class. We can just use it directly, maybe making its usage a bit more explicit with a typedef:
void commandPatternBoost2()
{
  cout << endl << "Fully using boost::function" << endl;

  TapeRecorder tr;

  typedef boost::function<void()> BoostCommand;
  BoostCommand play(boost::bind(&TapeRecorder::play, &tr));
  BoostCommand stop(boost::bind(&TapeRecorder::stop, &tr));
  BoostCommand forward(boost::bind(&TapeRecorder::forward, &tr));
  BoostCommand rewind(boost::bind(&TapeRecorder::rewind, &tr));

  cout << "Pressing button play" << endl;
  play();

  cout << "Pressing button stop" << endl;
  stop();

  cout << "Pressing button record" << endl;
  string s = "What a beautiful morning...";
  BoostCommand record(boost::bind(&TapeRecorder::record, &tr, s));
  record();
}
Cool, isn't it?

And if you are using a compiler that already supports the C++0x TR1, you could use std::function and std::bind exactly in the same way:
void commandPatternCpp0x()
{
  cout << endl << "Using std::function" << endl;

  TapeRecorder tr;

  typedef std::function<void()> Command0x;

  Command0x play(std::bind(&TapeRecorder::play, &tr));
  Command0x stop(std::bind(&TapeRecorder::stop, &tr));
  Command0x forward(std::bind(&TapeRecorder::forward, &tr));
  Command0x rewind(std::bind(&TapeRecorder::rewind, &tr));

  cout << "Pressing button play" << endl;
  play();

  cout << "Pressing button stop" << endl;
  stop();

  cout << "Pressing button record" << endl;
  string s = "What a beautiful morning...";
  Command0x record(std::bind(&TapeRecorder::record, &tr, s));
  record();
}
The code is based on an example provided by "Beyond the C++ Standard Library: An Introduction to Boost", by Björn Karlsson, an Addison Wesley Professional book. An interesting reading indeed.

Go to the full post

boost::function for functors

It is possible to pass a functor to boost::function, but we should pay some attention to its usage, since the functor is accepted by copy. That means there is no relation between the original functor and the copy internally used by the boost::function object.

Fortunately there is a way to make the boost::function object accept a reference to the passed functor, and this requires the usage of boost::ref.

To appreciate the difference between the direct usage of a functor and the one mediated by boost::ref see this example:

#include <iostream>
#include "boost/function.hpp"

using boost::function;
using boost::ref;
using std::cout;
using std::endl;

namespace
{
  class KeepingState
  {
    int total_;
  public:
    KeepingState() : total_(0) {}

    int operator()(int i)
    {
      total_ += i;
      return total_;
    }

    int total() const
    {
      return total_;
    }
  };
}

void function04()
{
  KeepingState ks;
  function<int(int)> f1;
  f1 = ks;

  function<int(int)> f2;
  f2 = ks;

  cout << "Default: functor copied" << endl;
  cout << "The current total is " << f1(10) << endl; // 10
  cout << "The current total is " << f2(10) << endl; // 10
  cout << "The total is " << ks.total() << endl; // 0

  cout << "Forcing functor by reference" << endl;
  f1 = ref(ks);
  f2 = ref(ks);

  cout << "The current total is " << f1(10) << endl; // 10
  cout << "The current total is " << f2(10) << endl; // 20
  cout << "The total is " << ks.total() << endl; // 20
}

The idea is to have an instance of a functor (class KeepingState) and pass it to two different boost::function objects. We can verify that there is no relation among the three objects ks, f1 and f2.

But if we assign ks to both f1 and f2 via boost::ref, we see that it is always the same object ks that is modified through f1 and f2.

The code is based on an example provided by "Beyond the C++ Standard Library: An Introduction to Boost", by Björn Karlsson, an Addison Wesley Professional book. An interesting reading indeed.

Go to the full post

std::function for member functions

We can use std::function to work with a member function. We just have to specify the class type of the input function in the std::function template. In a way, that is an explicit representation of the "this" pointer that is used internally to access the object.

Since the object can be passed in three ways (by value, by reference, by pointer), we also have to specify that choice in the std::function declaration.

If your compiler does not support std::function yet, you could get the same result using boost::function - just changing a couple of lines in the examples:
#include <iostream>

// boost version
//#include "boost/function.hpp"
//using boost::function;

// C++0x version
#include <functional>
using std::function;

namespace
{
   class AClass
   {
   public:
      void doStuff(int i) const
      {
         std::cout << "Stuff done: " << i << std::endl;
      }
   };
}

void function03()
{
   std::cout << "Member function, class object by value" << std::endl;
   function<void(AClass, int)> f1;
   f1 = &AClass::doStuff;
   f1(AClass(), 1);

   std::cout << "Member function, class object by reference" << std::endl;
   function<void(AClass&, int)> f2;
   f2 = &AClass::doStuff;
   AClass ac2;
   f2(ac2, 2);

   std::cout << "Member function, class object by pointer" << std::endl;
   function<void(AClass*, int)> f3;
   f3 = &AClass::doStuff;
   AClass ac3;
   f3(&ac3, 3);
}
The code is based on an example provided by "Beyond the C++ Standard Library: An Introduction to Boost", by Björn Karlsson, an Addison Wesley Professional book. An interesting reading indeed.

Go to the full post

std::function for functors

The std::function and boost::function classes work in the same way. So, if your compiler does not support C++0x yet, you can get exactly the same behaviour shown here using boost. Just remember to include the required header file:
#include "boost/function.hpp"

The point of this post is to show that with std::function we can do everything we can do with function pointers, while the reverse is not true. For instance, std::function also accepts functors as arguments.

Let's say we want to develop a class, Notifier, that calls back the functions we registered on it any time its state changes.

A first implementation uses a vector of function pointers. And it works fine.

The problem comes if we want to use functors, too. The reason for wanting them is quite clear: a functor gives us more power, since it can keep its own internal state.

But that requires redesigning our class to use std::function instead. We call the new version NotifierExt. The change in the code is minimal, and the advantage is self-evident, I think.

Here is the code:

#include <iostream>
#include <vector>
#include <functional>

namespace
{
  void printNewValue(int i)
  {
    std::cout << "The value has been updated and is now " << i << std::endl;
  }

  void changeObserver(int i)
  {
    std::cout << "Ah, the value has changed!" << std::endl;
  }

  class PrintPreviousValue
  {
    int lastValue_;
  public:
    PrintPreviousValue() : lastValue_(-1) {}

    void operator()(int i)
    {
      std::cout << "Previous value was " << lastValue_ << std::endl;
      lastValue_ = i;
    }
  };

  class Notifier
  {
    typedef void (*FunType)(int);
    std::vector<FunType> vec_;
    int value_;
  public:
    void addObserver(FunType t)
    {
      vec_.push_back(t);
    }

    void changeValue(int i)
    {
      value_ = i;
      for(size_t k = 0; k < vec_.size(); ++k)
        vec_[k](value_);
    }
  };

  class NotifierExt
  {
    typedef std::function<void(int)> FunType;
    std::vector<FunType> vec_;
    int value_;
  public:
    void addObserver(FunType t)
    {
      vec_.push_back(t);
    }

    void changeValue(int i)
    {
      value_ = i;
      for(size_t k = 0; k < vec_.size(); ++k)
        vec_[k](value_);
    }
  };
}

void function02()
{
  std::cout << "Using function pointer" << std::endl;
  Notifier n;
  n.addObserver(&printNewValue);
  n.addObserver(&changeObserver);

  n.changeValue(42);

  std::cout << "Using std::function" << std::endl;
  NotifierExt ne;
  ne.addObserver(&printNewValue);
  ne.addObserver(&changeObserver);
  ne.addObserver(PrintPreviousValue());

  ne.changeValue(42);
  ne.changeValue(39);
}


The code is based on an example provided by "Beyond the C++ Standard Library: An Introduction to Boost", by Björn Karlsson, an Addison Wesley Professional book. An interesting reading indeed.

Go to the full post

boost::function - std::function

boost::function and std::function, the latter introduced by the C++0x technical report 1, generalize and make more usable the concept of pointer to function. Besides, the function class adds functionality to the plain function pointer construct, such as the chance of attaching a state to the function, if required.

In this first example, we see how to create a function object referring to a function, and how to use it:

#include <iostream>
#include <functional> // std::function
#include "boost/function.hpp" // boost::function

using std::cout;
using std::endl;
using std::function;

namespace
{
  bool check(int i, double d)
  {
    return i > d;
  }
}

void function01()
{
  boost::function<bool (int, double)> fb = &check;

  function<bool (int, double)> fs = &check;

  if(fb(10, 1.1))
    cout << "Boost function works as expected" << endl;

  if(fs(10, 1.1))
    cout << "std function works as expected" << endl;
}

In boost and in C++0x the class function looks just the same. In the template parameters we put the return type and, in round brackets, the parameter types.

The code is based on an example provided by "Beyond the C++ Standard Library: An Introduction to Boost", by Björn Karlsson, an Addison Wesley Professional book. An interesting reading indeed.

Go to the full post

rand

Probably the simplest way of generating a pseudo-random sequence is using the pair of functions srand() and rand(), part of the C standard library.

Their usage is quite straightforward. We use srand() to select a specific sequence, and then we call rand() every time we want the next value in our pseudo-random sequence.

We immediately see that the weakest link in this way of proceeding is the startup. If we always feed srand() with the same seed, we always end up with the same sequence, and that is usually not the expected behaviour.

But for this problem there is an easy solution: we use the current time as a seed. Usually this is enough to guarantee a certain variety in the results.

There is another caveat, though. If you are using VC++, the first number generated by rand() tends to change only slightly in response to increasing seeds passed to srand(). It looks like a sort of bug, but I didn't investigate the matter much, since there is a really immediate workaround: discard the first generated value, and start using the second. Well, it is not especially elegant, but it is cheap and it works fine.

As we can see in the output of the example (when we run it, otherwise trust me), the generated numbers tend to be equally distributed, and that is often what we expect from a pseudo-random sequence.

#include <cstdlib>
#include <iostream>
#include <ctime>
#include <vector>
#include <algorithm>

using namespace std;

namespace
{
  void dump(vector<int>& v)
  {
    auto f = [] (int x) { static int i = 0; cout << '[' << ++i << " = " << x << "] "; };
    for_each(v.begin(), v.end(), f);
    cout << endl;
  }
}

int main()
{
  vector<int> vec(64, 0);

  srand(static_cast<unsigned int>(time(0))); // 1.
  rand(); // discard the first value generated

  for(int i = 0; i < 6400000; ++i)
  {
    int value = static_cast<int>((static_cast<double>(rand()) / (RAND_MAX + 1.0)) * 64); // 2.
    ++vec[value];
  }

  dump(vec);
  system("pause");
}

1. time() returns a time_t value, while srand() wants an unsigned int parameter; with an explicit cast we get rid of a warning about a possible precision loss.
2. As you can see, I was interested in getting pseudo-random numbers in the interval [0..63], so I cast the integer generated by rand() to a double in the interval [0..1), multiplied the result by 64, and cast it back to an integer. A lot of casting, but (mostly) harmless.

Go to the full post

Fibonacci multithreading

Let's use the Boost Thread library to develop a simple Fibonacci calculator.

I reckon you already know enough about the Fibonacci function, which is described recursively in this way:
Fibonacci(n) = Fibonacci(n-2) + Fibonacci(n-1)
Where Fibonacci(0) and Fibonacci(1) are defined to be 0 and 1.

If this looks new to you, you can read more about it on wikipedia.

A way of calculating a Fibonacci number would be to just sit and wait for the user to tell us which Fibonacci number he actually wants, and only then start to calculate it. But that would be a waste of time. While the user is pondering his choice, we could already start doing the job, putting the results in a cache. When the user tells us which number he actually wants, we check whether we are lucky and the result is already available.

That means we need a couple of threads: one waiting for the user's choice, one working on the Fibonacci calculation.

Then, it is quite natural to think about another optimization: the function that performs the Fibonacci calculation could take advantage of the cache, too. This is not very interesting from the point of view of the multithreading design of this piece of code, but it has a (big) impact on the execution time. You could have fun checking the change in the application performance when using it.

Here is the first part of the code, the include file for the class Fibonacci declaration:
#pragma once // 1

#include <vector>
#include <boost/thread.hpp>
#include <boost/thread/condition.hpp>

class Fibonacci
{
private:
    std::vector<unsigned int> values_; // 2
    bool optim_; // 3

    boost::mutex mx_; // 4
    boost::condition cond_; // 5
    boost::thread tCalc_; // 6

    void precalc(); // 7
    unsigned int getValue(unsigned int index); // 8
public:
    static const int MAX_INDEX = 40; // 9

    Fibonacci(bool optim = false);
    ~Fibonacci();

    unsigned int get(unsigned int index); // 10
};
1. This code is for MSVC++ 2010, which supports the useful "once" pragma directive to avoid multiple inclusions of the same file.
2. This vector is used to cache the intermediate values.
3. Flag for the performance optimization; by default it is not used.
4. Mutex ruling the access to the cache.
5. Condition used to notify the reading thread waiting on the vector that the writing thread has generated a new value.
6. The thread that takes care of calculating the Fibonacci numbers.
7. The thread (6) executes this function to precalculate the Fibonacci numbers.
8. Internal function that computes a specific Fibonacci number.
9. This is a toy Fibonacci calculator: the biggest index we can ask for is 40.
10. The user of the Fibonacci class asks for a Fibonacci number through this method.

It's quite easy to use the Fibonacci class: we just create an object and call its get() method, passing the index of the required Fibonacci number:
#include <iostream>
#include "Fibonacci.h"

void fib()
{
    Fibonacci fib;

    int input;
    while(true)
    {
        std::cout << "Your input [0 .. "
            << Fibonacci::MAX_INDEX << "]: ";
        std::cin >> input;
        if(input < 0 || input > Fibonacci::MAX_INDEX)
            break;
        std::cout << "Fibonacci of " << input << " is "
            << fib.get(input) << std::endl;
    }
    std::cout << "Bye" << std::endl;
}
Let's have a look now at the Fibonacci class implementation.

Here is the constructor:
Fibonacci::Fibonacci(bool optim) : optim_(optim)
{
    values_.reserve(MAX_INDEX);
    tCalc_ = boost::thread(&Fibonacci::precalc, this);
}
We already know the max size of our vector, so we immediately reserve enough room for it. Then we start a thread on Fibonacci::precalc() - since it is a non-static member function, we must pass the boost::thread constructor a pointer to the object it operates on, that is, "this".

The destructor has to wait for the tCalc thread to terminate, so it calls join() on it. But since calculating Fibonacci numbers could be a long and boring affair, it is better to first call interrupt() on that thread, to cut the work short if possible. In any case we are destroying the object, so no more calculation is required:
Fibonacci::~Fibonacci()
{
    tCalc_.interrupt();
    tCalc_.join();
}
To get a Fibonacci number we call the get() method. It tries to read from the vector the Fibonacci number at the index passed by the caller. Notice that, to gain exclusive access to the vector, it takes a lock on the mutex we created just for that reason.

If the element is not available yet, we just wait on the condition for the other thread to do its job. Any time a new Fibonacci number is generated we get a notification, so another check is done on the vector size, and the user gets either another waiting message - if the expected number has not been calculated yet - or the return value:
unsigned int Fibonacci::get(unsigned int index)
{
    if(index > static_cast<unsigned int>(MAX_INDEX)) // index is unsigned, it can't be negative
        return 0;

    boost::mutex::scoped_lock lock(mx_);
    while(index >= values_.size())
    {
        std::cout << "Please wait ..." << std::endl;
        cond_.wait(lock);
    }
    return values_.at(index);
}
Here is precalc(), the function run by the other thread. It is just a loop that calls getValue() - the core functionality of this class, the place where the Fibonacci numbers are actually computed - then it gains exclusive access to the mutex protecting the vector, puts the newly calculated value in it, and finally notifies the other thread that something has changed:
void Fibonacci::precalc()
{
    for(int iteration = 0; iteration <= MAX_INDEX; ++iteration)
    {
        unsigned int value = getValue(iteration);
        boost::lock_guard<boost::mutex> lock(mx_);
        values_.push_back(value);
        cond_.notify_one();
    }
}
And finally, the actual Fibonacci number calculation. To shorten the working time we can use the cache - in any case it is already there - otherwise we figure out the value by recursively calling getValue().
Notice that we put an interruption_point before the recursive calls; in this way the execution can be canceled there, when required:
unsigned int Fibonacci::getValue(unsigned int index)
{
    if(optim_)
    {
        boost::lock_guard<boost::mutex> lock(mx_);
        if(index < values_.size())
            return values_.at(index);
    }

    switch(index)
    {
    case 0:
        return 0;
    case 1:
        return 1;
    default:
        boost::this_thread::interruption_point();
        return getValue(index - 2) + getValue(index - 1);
    }
}

Go to the full post

boost::thread_specific_ptr

The boost::thread_specific_ptr is a smart pointer that knows about multithreading: each thread accessing it gets its own separate instance of the pointed-to object.

To appreciate the difference with a normal smart pointer we can have a look at this example:

#include <iostream>
#include "boost/thread/thread.hpp"
#include "boost/thread/mutex.hpp"
#include "boost/thread/tss.hpp"
#include "boost/scoped_ptr.hpp"

using namespace std;

namespace
{
   class Count
   {
   private:
      int step_;
      static boost::mutex mio_;
      static boost::thread_specific_ptr<int> ptrSpec_;
      static boost::scoped_ptr<int> ptrUnique_;

   public:
      Count(int step) : step_(step) {}

      void operator()()
      {
         if(ptrSpec_.get() == nullptr) // 2.
         {
            ptrSpec_.reset(new int(0)); // 3.
         }

         for(int i = 0; i < 10; ++i)
         {
            boost::mutex::scoped_lock lock(mio_);
            *ptrSpec_ += step_;
            *ptrUnique_ += step_;
            cout << boost::this_thread::get_id() << ": " << *ptrSpec_ << ' ' << *ptrUnique_ << endl;
         }
      }
   };

   boost::mutex Count::mio_;
   boost::thread_specific_ptr<int> Count::ptrSpec_;
   boost::scoped_ptr<int> Count::ptrUnique_(new int(0)); // 1.
}

void dd05()
{
   boost::thread t1(Count(1));
   boost::thread t2(Count(-1));
   t1.join();
   t2.join();
}
Notice the differences between the scoped_ptr and the thread_specific_ptr.

The scoped ptr is initialized when the variable is defined (1.) and, being a "normal" static data member, exists in just one instance, shared by all the objects of the class Count.

On the other side we have the thread specific ptr. Even though it is a static object, each thread gets its own copy of it. Given that, we can't expect it to be initialized like any other "normal" static data member; we have to go through a special routine instead: before its first usage we check (2.) whether the pointer is set, and if it is not, we reset() the smart pointer with a freshly allocated value (3.).


boost::condition

A typical situation in a multithreaded context is write/read access to a buffer from different threads. Reading can be done only if the buffer is not empty, and writing only if the buffer is not full. That means that the read and write threads need a way to communicate with each other.

That is precisely what a boost::condition can be used for.

Basically, we should do something like this: the writer checks the buffer; if it is full, it waits for the reader thread to free some space for the new data; besides, the writer notifies the reader whenever it puts an item in the buffer. On the other side, the reader checks the buffer; if it is empty, it waits for the writer to make new data available; when it reads an element of the buffer it pops it, and then notifies the other thread that more room is available.

Let's see this example:

#include <iostream>
#include "boost/thread/thread.hpp"
#include "boost/thread/mutex.hpp"
#include "boost/thread/condition.hpp"
#include "boost/circular_buffer.hpp"

using namespace std;

namespace
{
class Buffer : private boost::noncopyable
{
private:
    enum { BUF_SIZE = 3 };
    boost::condition cond_;
    boost::mutex mcb_; // mutex on the buffer
    boost::mutex mio_; // mutex for console access

    boost::circular_buffer<int> cb_;

public:
    Buffer(int size = BUF_SIZE) : cb_(size) {}

    void put(int i)
    {
        {
            boost::mutex::scoped_lock lock(mio_);
            cout << "sending: " << i << endl;
        }

        // acquire exclusive access on the buffer
        boost::mutex::scoped_lock lock(mcb_);
        if(cb_.full())
        {
            {
                boost::mutex::scoped_lock lock(mio_);
                cout << "Buffer is full. Waiting..." << endl;
            }

            // the buffer is full: wait on the lock for a notification
            while(cb_.full())
                cond_.wait(lock);
        }
        cb_.push_back(i);
        // notify the other thread that a new item is available
        cond_.notify_one();
    }

    int get()
    {
        // acquire exclusive access on the buffer
        boost::mutex::scoped_lock lock(mcb_);
        if(cb_.empty())
        {
            {
                boost::mutex::scoped_lock lock(mio_);
                cout << "Buffer is empty. Waiting..." << endl;
            }

            // the buffer is empty: wait on the lock for a notification
            while(cb_.empty())
                cond_.wait(lock);
        }

        int i = cb_.front();
        cb_.pop_front();
        // notify the other thread that the buffer is not full anymore
        cond_.notify_one();

        {
            boost::mutex::scoped_lock lock(mio_);
            cout << i << " received" << endl;
        }

        return i;
    }
};

const int ITERS = 20;

void writer(Buffer& buf)
{
    for(int n = 0; n < ITERS; ++n)
        buf.put(n);
}

void reader(Buffer& buf)
{
    for(int x = 0; x < ITERS; ++x)
        buf.get();
}
}

void dd04()
{
    Buffer buf;

    // notice the use of boost::ref
    boost::thread t1(&reader, boost::ref(buf));
    boost::this_thread::sleep(boost::posix_time::milliseconds(5));
    boost::thread t2(&writer, boost::ref(buf));

    t1.join();
    t2.join();
}

The main thread generates two working threads, one running the reader free function and one the writer. Both of them work on the buffer, which is allocated in the main thread and passed by reference to the read/write threads. It is necessary to use boost::ref(), otherwise a copy of the Buffer object, instead of a reference to it, would be passed to the free functions (and Buffer, being noncopyable, would not even compile in that case).

But the point of this example is the mechanism of waiting on a condition with a lock, to give the other thread time to do what the current thread needs before it can go on with its job, and of notifying on a condition, to let the other threads know that something has changed in that context.


boost::circular_buffer

A circular buffer is quite a useful concept. It is not difficult to adapt a "normal" container to emulate it: basically, when we are going forward and reach its end, we just move back to the beginning.

But, as usual, it is not worth reinventing the wheel when someone else provides it to us, it works, and it does what we expect from it.

This is the case of the circular buffer concept and its boost implementation boost::circular_buffer.
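Just to make the adaptation idea above concrete, here is a minimal hand-rolled sketch over a plain std::vector - the write index wraps around with a modulo, overwriting the oldest element once the capacity is exceeded (Ring is a made-up name; boost::circular_buffer is of course the better choice):

```cpp
#include <cstddef>
#include <vector>

// Minimal ring buffer: push_back() wraps to the beginning at the end of
// the underlying vector, overwriting the oldest element when full.
class Ring
{
    std::vector<int> data_;
    std::size_t next_;   // index where the next element is written
    std::size_t count_;  // how many slots have been filled so far
public:
    explicit Ring(std::size_t capacity) : data_(capacity), next_(0), count_(0) {}

    void push_back(int value)
    {
        data_[next_] = value;
        next_ = (next_ + 1) % data_.size(); // move back to the beginning at the end
        if (count_ < data_.size())
            ++count_;
    }

    bool full() const { return count_ == data_.size(); }

    // oldest element still stored in the buffer
    int front() const { return full() ? data_[next_] : data_[0]; }
};
```

With a capacity of three, pushing 0, 1, 2, 3 leaves 1 as the oldest element: the fourth push silently overwrote the first - the same behavior boost::circular_buffer gives us for free, with a full container interface on top.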

Here is a short example to show how it works:

#include <algorithm>
#include <iostream>
#include <iterator>

#include "boost/circular_buffer.hpp"

using namespace std;

namespace
{
template <class T>
void dump(boost::circular_buffer<T>& buf)
{
    copy(buf.begin(), buf.end(), ostream_iterator<T>(cout, " "));
    cout << endl;
}
}

void cirbuf()
{
    const int BUF_SIZE = 3;
    // Create a circular buffer with a capacity for 3 integers.
    boost::circular_buffer<int> cb(BUF_SIZE);

    // Insert some elements into the buffer.
    for(int i = 0; i < BUF_SIZE; ++i)
    {
        cb.push_back(i);

        cout << i << ' ';
        if(cb.full())
            cout << "buffer full" << endl;
        else
            cout << "there is still room in the buffer" << endl;
    }

    dump(cb);

    for(int i = 0; i < BUF_SIZE; ++i)
    {
        cout << cb.front() << ' ';
        cb.pop_front();
        if(cb.empty())
            cout << "buffer empty" << endl;
        else
            cout << "there is still stuff in the buffer" << endl;
    }
}


boost::thread on a functor

It is possible to create a boost::thread passing a functor to its constructor; in this way we can keep a state for our new thread as private data members of the class implementing the functor.

Let's write an application where a couple of threads are created and loop using a private data member. It is worth noting that since we are accessing a shared resource - the console, where our threads print some logging - we must use a mutex to rule the access to it.

#include <iostream>
#include "boost/thread/thread.hpp"
#include "boost/thread/mutex.hpp"

using std::cout;
using std::endl;

namespace
{
class Count
{
    static boost::mutex mio_; // 1.
    int multi_;
public:
    Count(int multi) : multi_(multi) {}

    void operator()()
    {
        for(int i = 0; i < 10; ++i)
        {
            boost::mutex::scoped_lock lock(mio_); // 2.
            cout << "Thread " << boost::this_thread::get_id() << " is looping: " << i*multi_ << endl;
        }
    }
};

boost::mutex Count::mio_; // 3.
}

void dd02()
{
    cout << "Main thread " << boost::this_thread::get_id() << endl;
    boost::thread t1(Count(1));
    boost::thread t2(Count(-1));
    t1.join();
    t2.join();
    cout << "Back to the main thread " << endl;
}

1. this mutex is an implementation detail of Count; it is unique in the class, so all the threads synchronize on the same object when accessing the resource. That's why it is static and private.
2. the mutex protects the access to the console. We use a scoped_lock, meaning the lock is automatically released when the code reaches the end of the scope. In this way all threads have a fair chance to get access to the resource.
3. since the mutex is a static member of Count, we should remember to define it.

Actually, in such a simple case it is not worth creating a class; a free function would be enough, especially when we realize that we can pass the parameters for the function to the thread constructor - a clever trick from the designers of the thread class.

So, let's rewrite the example using just a free function:

#include <iostream>
#include "boost/thread/thread.hpp"
#include "boost/thread/mutex.hpp"

using namespace std;

namespace
{
boost::mutex mio;

void count(int multi)
{
    for(int i = 0; i < 10; ++i)
    {
        boost::mutex::scoped_lock lock(mio);
        cout << boost::this_thread::get_id() << ": " << multi * i << endl;
    }
}
}

void dd03()
{
    boost::thread t1(&count, 1);
    boost::thread t2(&count, -1);

    t1.join();
    t2.join();
}


boost::thread

It's quite easy to create a new thread in an application using the boost::thread library.

In its simpler form, we just pass a pointer to a free function to the constructor of a boost::thread object, and that's it. We just have to remember to call join() on the thread object, so that the main thread waits for the newly created one to complete.

Here is an example that shows how the thing works:

#include <boost/thread/thread.hpp>
#include <iostream>

using namespace std;

namespace
{
void hello()
{
    cout << "This is the thread " << boost::this_thread::get_id() << endl;
}
}

void dd01()
{
    cout << "We are currently in the thread " << boost::this_thread::get_id() << endl;
    boost::thread t(&hello);
    t.join();
    cout << "Back to the thread " << boost::this_thread::get_id() << endl;
}

Not much more to say about this example. The boost::thread object runs the function hello(), which just prints its thread id - obtained calling get_id(). The main thread waits for the other thread to complete - thanks to the call to join() - and then terminates.


std::transform

If we want to apply the same change to all the elements in a sequence, it is nice to use the std::transform algorithm. As usual, we pass to it a predicate that is delegated to perform the actual change.

If the transformation is not trivial we usually write a functor to store the rule that has to be applied. At least, that's what we did when there was no boost or C++0x available.

In the example we see how to implement the transformation "increase the value by ten, then reduce the result by 5%" using boost::bind, and then with a C++0x lambda.

#include <iostream>
#include <list>
#include <functional>
#include <algorithm>
#include <iterator>

#include "boost/bind.hpp"

using namespace std;

namespace
{
template<class T>
void dump(list<T>& lst)
{
    copy(lst.begin(), lst.end(), ostream_iterator<T>(cout, " "));
    cout << endl;
}

void init(list<double>& lst)
{
    lst.clear();
    lst.push_back(10.0);
    lst.push_back(100.0);
    lst.push_back(1000.0);
    dump(lst);
}
}

void bind05()
{
    list<double> values;

    init(values);
    auto fb = boost::bind(multiplies<double>(), 0.95, boost::bind(plus<double>(), _1, 10));
    transform(values.begin(), values.end(), values.begin(), fb);
    dump(values);

    init(values);
    auto fl = [](double d) { return (d + 10) * 0.95; };
    transform(values.begin(), values.end(), values.begin(), fl);

    dump(values);
}

Why use boost::bind when with a C++0x lambda the code is so straightforward? Well, maybe when you have to use a compiler that does not support C++0x yet.

The code is based on an example provided by "Beyond the C++ Standard Library: An Introduction to Boost", by Björn Karlsson, an Addison Wesley Professional book. An interesting reading indeed.


std::count_if and std::find_if

Another place where boost::bind or a lambda expression comes in handy is as the predicate for the conditional versions of the STL algorithms count_if and find_if.

We use count_if when we want to get the number of elements in a sequence that respect a clause we specify; find_if is used to find the first element in a sequence matching our requirements.

Without using boost or C++11 we should define a functor to specify the behavior that we are interested in. As we can see in this example, the new techniques make the code cleaner and more readable:

#include <iostream>
#include <vector>
#include <functional>
#include <algorithm>
#include <iterator>

#include "boost/bind.hpp"

using namespace std;

namespace
{
   template<class T>
   void dump(vector<T>& vec)
   {
      copy(vec.begin(), vec.end(), ostream_iterator<T>(cout, " "));
      cout << endl;
   }
}

void bind04()
{
   vector<int> vec;

   vec.push_back(12);
   vec.push_back(7);
   vec.push_back(4);
   vec.push_back(10);
   dump(vec);

   cout << endl << "Using boost::bind" << endl;

   cout << "Counting elements in (5, 10]: ";

   // 1
   auto fb = boost::bind(logical_and<bool>(),
      boost::bind(greater<int>(), _1, 5),
      boost::bind(less_equal<int>(), _1, 10));
   int count = count_if(vec.begin(), vec.end(), fb);
   cout << "found " << count << " items" << endl;

   cout << "Getting first element in (5, 10]: ";
   vector<int>::iterator it = find_if(vec.begin(), vec.end(), fb);
   if(it != vec.end())
      cout << *it << endl;

   cout << endl << "Same, but using lambda expressions" << endl;

   cout << "Counting elements in (5, 10]: ";

   // 2
   auto fl = [](int x){ return x > 5 && x <= 10; };
   count = count_if(vec.begin(), vec.end(), fl);
   cout << "found " << count << " items" << endl;

   cout << "Getting first element in (5, 10]: ";
   it = find_if(vec.begin(), vec.end(), fl);
   if (it != vec.end())
      cout << *it << endl;
}
1. since we use the same predicate a couple of times, it's a good idea to store it in a local variable (here using the cool C++11 'auto' keyword, which saves us from figuring out the correct type definition for the predicate). The construct is a bit verbose, but its sense should be clear enough: we are looking for values in the interval (5, 10]; count_if counts all the elements in the sequence respecting this rule; find_if returns the iterator to the first element for which it holds - or end() if none does.
2. it is so much cleaner to implement the same functionality using the C++11 lambda syntax.

The code is based on an example provided by "Beyond the C++ Standard Library: An Introduction to Boost", by Björn Karlsson, an Addison Wesley Professional book. An interesting reading indeed.


std::sort

To use the STL sort algorithm on a container, the type stored in the container should define a less-than operator.

But what if we are required to order the container in different ways? We can pass to sort a predicate that defines our specific less-than operator.

Without boost or C++0x this usually means creating a specific functor just for that.

Here we see an example to accomplish this task using boost::bind or a lambda expression:

#include <iostream>
#include <string>
#include <vector>
#include <functional>
#include <algorithm>
#include <iterator>

#include "boost/bind.hpp"

using namespace std;

namespace
{
class PersonalInfo
{
    string name_;
    string surname_;
    unsigned int age_;

public:
    PersonalInfo(const string& n, const string& s, unsigned int age) :
        name_(n), surname_(s), age_(age) {}

    string name() const { return name_; }

    string surname() const { return surname_; }

    unsigned int age() const { return age_; }

    // 1. define the operator less-than based on the first name
    friend bool operator< (const PersonalInfo& lhs, const PersonalInfo& rhs)
    { return lhs.name_ < rhs.name_; }
};

// 2.
ostream& operator<< (ostream& os, const PersonalInfo& pi)
{
    os << pi.name() << ' ' << pi.surname() << ' ' << pi.age() << endl;
    return os;
}

void dump(vector<PersonalInfo>& vec)
{
    copy(vec.begin(), vec.end(), ostream_iterator<PersonalInfo>(cout));
    cout << endl;
}
}

void bind03()
{
    vector<PersonalInfo> vec;
    vec.push_back(PersonalInfo("Little", "John", 30));
    vec.push_back(PersonalInfo("Friar", "Tuck", 50));
    vec.push_back(PersonalInfo("Robin", "Hood", 40));
    dump(vec);

    cout << "Default sorting (by name):" << endl;
    sort(vec.begin(), vec.end()); // 3.
    dump(vec);

    cout << "By age:" << endl; // 4.
    sort(vec.begin(), vec.end(), boost::bind(
        less<unsigned int>(),
        boost::bind(&PersonalInfo::age, _1),
        boost::bind(&PersonalInfo::age, _2)));
    dump(vec);

    cout << "By surname:" << endl;
    sort(vec.begin(), vec.end(), boost::bind(
        less<string>(),
        boost::bind(&PersonalInfo::surname, _1),
        boost::bind(&PersonalInfo::surname, _2)));
    dump(vec);

    cout << "Using Lambda, sorting by age:" << endl; // 5.
    auto fa = [](PersonalInfo p1, PersonalInfo p2) { return less<unsigned int>()(p1.age(), p2.age()); };
    sort(vec.begin(), vec.end(), fa);
    dump(vec);

    cout << "Using Lambda, sorting by surname:" << endl;
    auto fs = [](PersonalInfo p1, PersonalInfo p2) { return less<string>()(p1.surname(), p2.surname()); };
    sort(vec.begin(), vec.end(), fs);
    dump(vec);
}

1. since we define the less-than operator for this class, it is possible to sort its objects
2. let the ostream know how to print it
3. first call to sort(), using the default less-than operator
4. let's use boost::bind to create an ordering predicate on the fly. We use the STL less functor, passing it the two elements in the sequence that the sort algorithm provides
5. same as 4., but using C++0x lambda expressions. It is not necessary to store the lambda expression in a local variable, but it looks to me that the code is a bit more readable in this way; and using the cool C++0x type inference feature, through the keyword auto, it is neat and clear.

The code is based on an example provided by "Beyond the C++ Standard Library: An Introduction to Boost", by Björn Karlsson, an Addison Wesley Professional book. An interesting reading indeed.


Lambda for Fibonacci

I have found an interesting example on an MSDN page that shows some lambda expression features.

I have reworked it a bit, and here you have the result:

#include <algorithm>
#include <iostream>
#include <vector>

namespace
{
    void dump(const std::vector<int>& v)
    {
        std::for_each(v.begin(), v.end(), [](int n) { std::cout << n << ' '; }); // 1
        std::cout << std::endl;
    }
}

void fibonacci(size_t size)
{
    std::vector<int> vec(size, 1); // 2
    dump(vec);

    int base = 0; // 3 
    std::generate_n(vec.begin() + 2, size - 2, // 4
        [base, &vec]() mutable throw() -> int // 5
        {
            return vec[base++] + vec[base]; // 6
        });

    dump(vec);
    std::cout << base << std::endl; // 7
}
1. The first usage of a lambda expression here is not very interesting: it helps the for_each construct output the vector elements to the console. The currently scanned element is assigned to the lambda parameter, which is then used in the lambda body.
2. This vector is created using the size passed by the user, initializing each of its values to one. Performance-wise this is not an optimal solution, but we can live with it in this context.
3. In the variable base we keep the index of the first element in the vector that we need to calculate the next Fibonacci value.
4. The generate_n() STL algorithm operates on a sequence, starting from the element pointed to by the iterator passed as first parameter, iterating for the number of times passed as second parameter, and applying the predicate specified as third parameter. It is here that we have an interesting lambda expression.
5. In the capture clause we see the base variable passed by value, and vec by reference. These variables are accessible in the lambda body. The "mutable" specification basically means that the by-value variables are copied into the lambda expression, so no change done there affects the originals. The "throw()" clause states that we are not supposed to throw any exception from the lambda expression. The "arrow" is the return type clause, here specifying that the lambda expression returns an int; in this case it is not mandatory, since the compiler is smart enough to deduce the return type.
6. Notice that "base" is increased; this is the reason why we must specify this lambda expression as "mutable": by default you can't modify a by-value captured variable in a lambda.
7. Here we'll see that the local "base" has not been changed.
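To make points 6 and 7 checkable, here is a compilable variant of the snippet above that returns the vector and reports the value of the local base after generate_n() (fibo and baseAfter are made-up names; the deprecated throw() clause is dropped, and size is assumed to be at least 2):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Same technique as the post: a mutable lambda capturing base by value
// and vec by reference, driven by generate_n(). Requires size >= 2.
std::vector<int> fibo(std::size_t size, int& baseAfter)
{
    std::vector<int> vec(size, 1);
    int base = 0;
    std::generate_n(vec.begin() + 2, size - 2,
        [base, &vec]() mutable -> int
        {
            return vec[base++] + vec[base];
        });
    baseAfter = base; // still 0: the lambda increased its own copy
    return vec;
}
```

For size 8 this produces 1 1 2 3 5 8 13 21, and baseAfter comes back as 0, confirming that the mutable lambda worked on a private copy of base.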
