Multi-Processor Generalized Data Flow Programming Model

The Multi-Processor Generalized Data Flow (MPGD) programming model is used to construct networks, or models, of computations that communicate using timestamped events and execute according to a specific semantics. The framework is intended to be modular, in that it is easy to create new execution semantics for an application that uses the programming model.

The MPGD framework is composed of a thread scheduler built within the Group Scheduling framework, a library used by user-space processes to interface with the scheduler, and an MPGD configuration that implements a particular semantics.

Group Scheduling

This section gives a brief overview of Group Scheduling and its use in implementing the MPGD programming model. Group Scheduling is a project maintained by the KUSP research group that allows thread schedulers with arbitrary semantics to be created and integrated with Linux in a significantly easier and more accurate way than ad-hoc methods such as priority mappings. More information about Group Scheduling can be found on the KUSP group website.

Group Scheduling is a hierarchical scheduling framework that facilitates the creation of schedulers with arbitrary semantics by allowing a developer to directly implement a desired application semantics. This is in stark contrast to traditional approaches that require an application semantics to be mapped onto the programming model exported by the operating system, which is most typically a priority model. Serious problems are often encountered when creating semantic mappings. These include complex mapping rules that are difficult to develop and debug, as well as loss of fidelity in the implementation that makes the task of modeling for verification difficult and error prone. Furthermore, when multiple application semantics co-exist on a single system, their access to shared resources must be managed in order to properly support a given application’s semantics. Group Scheduling solves these problems by allowing applications to explicitly represent their semantics using any data structures that are appropriate. Finally, the semantics of programming models created in Group Scheduling can be easily integrated with concurrency control in a general way that avoids hard-coded semantic integration such as priority inheritance in Linux.


Figure 1: Group Scheduling MPGD Overview

In Figure 1 a sample MPGD application is shown that is controlled by Group Scheduling. The application is composed of three threads shown at the bottom of the figure. Each thread in the application represents one or more actors that are arranged in a specific network or model. Threads are scheduled by the MPGD thread scheduler according to a specific semantics; in general these semantics schedule a thread only when one of the actors it represents may process an event that has been sent to it. When an actor may process an event is determined by a modular set of semantics called an MPGD configuration.

At the top of the figure is the MPGD thread scheduler implemented in Group Scheduling, labeled MPGD. The MPGD scheduler contains three thread members labeled T1, T2, and T3. The member labeled Linux represents all other computation on the system. It should be noted that the hierarchy shown is simplified: the Linux member and the MPGD group may themselves be members of a different root group that gives the MPGD group exclusive preference over any Linux computation. Other arrangements are possible, such as making system-level computations (e.g. the network soft-IRQ) part of an MPGD computation when that computation relies on network activity.

Each of the thread members contains state information used to determine when a thread is to be scheduled. For example, the sequence of timestamps on events sent to an actor may be used to calculate when an actor may be scheduled and an event fired. In this case the timestamp sequence is maintained in the thread scheduler in order to explicitly represent this particular scheduling semantics and algorithm.


The MPGD architecture can be broken down into three components: (1) a thread scheduler that represents applications composed of event-driven threads, (2) a user-space library that applications use to communicate with the scheduler, and (3) an implementation of a given scheduling semantics.

MPGD Network Socket Integration

The MPGD platform integrates with the Linux network stack to automatically extract application events that pass through TCP or UDP sockets. Sockets are used to connect platforms across a network, and may also be used for actor-to-actor communication.


Figure 2: Threads sending and receiving network events

Figure 2 illustrates the flow of data between three threads communicating over sockets. At the top-left of the figure is an input port socket connection delivering events to Thread-1. Thread-1, Thread-2, and Thread-3 also communicate with each other using sockets local to the platform. In the MPGD framework an actor will send an event to another actor over a socket-based connection. The event being transmitted contains a timestamp that is required by the execution strategy, and is used to determine when the receiving thread may run to consume the packet containing the event. To accomplish this the MPGD framework implements hooks in the network stack to monitor network sockets for MPGD events being delivered to an application, automatically extracting scheduling information from the events (i.e. the timestamp of the event).


Figure 3: Events Are Extracted From Socket Channels

Figure 3 shows a detailed view of packet extraction from socket-based communication channels. The top-left depicts a network connection over which events are sent. Events in the diagram are the two-color rectangles labeled TS/D (for timestamp and data payload, respectively). An event being sent to the platform enters the network stack and is extracted by the MPGD network extensions. Based on the configuration of MPGD, an extracted event is placed on an input queue associated with the thread implementing the actor to which the event is destined. The input queue of events is maintained by the scheduling framework (as opposed to the socket queue) because events in a queue may need to be reordered or examined. A thread scheduled by the MPGD scheduler will consume events directly from the MPGD input queue maintained for that thread. The red arrows exiting the threads in Figure 3 depict output ports over which generated events flow back to the network stack for extraction by the MPGD network extensions.

Custom Application Event Delivery

Computations in MPGD do not have to communicate through sockets, and may implement their own form of event delivery. The MPGD framework provides an interface through which user-space applications that take responsibility for event delivery may update scheduling information within Group Scheduling. Figure 4 depicts an application composed of three actors, A3, A4, and A5, communicating through a specialized data structure labeled DS. In this configuration the Group Scheduling framework continues to maintain the information necessary to make scheduling decisions, and the application is responsible for updating scheduling state information.


Figure 4: Events Delivered Using Application-Specific Mechanism

Default Scheduling Semantics

The MPGD scheduler maintains a per-CPU linked list onto which runnable threads are placed. These lists are effectively the run-queues of the MPGD scheduler. The default behavior of the MPGD scheduler is to order each linked list according to the events present at the ports of the actors implemented by the threads on the run-queue. The threads are then scheduled in earliest-event-first order. This EDF policy of the MPGD scheduler can be used by MPGD configurations and execution strategies, or other semantics may be implemented to replace it.

Making Threads Runnable

A thread is runnable when it has one or more events at its input ports that are safe to process. A thread will only be scheduled when at least one event is safe to process; thus, some mechanism is required for delaying the execution of a thread that has no pending events that are safe to process.

Initially a thread is not runnable and has not received events. When an event is received on a port of an actor, a high-resolution timer is scheduled to expire at the earlier of the safe-to-process time of the current earliest event and that of the newly received event. When the timer expires, the thread implementing the actor is scheduled by placing it onto the EDF-ordered run-queue.

A thread will run until it fires all events that are safe-to-process, at which point a new timer will be scheduled for the next earliest event that is safe to process, and the thread will be removed from its run-queue.

Data Structures


Time is represented by MPGD using two different data structures, depending upon where within the framework time is being used. In user-space, time is represented using an instance of struct timespec, which uses the fields tv_sec and tv_nsec to represent time with nanosecond resolution. Within the kernel (e.g. the MPGD thread scheduler), time is represented using an instance of ktime_t, a 64-bit scalar representation of time supporting nanosecond resolution. Timing information is automatically converted between struct timespec and ktime_t formats as it passes between user-space and kernel-space in the MPGD framework. struct timespec was chosen to represent time in user-space because it is a common, familiar format, while ktime_t is the representation required by the Linux kernel when interacting with high-resolution timers.

High-Resolution Timers

Linux contains a high-resolution timing subsystem called hrtimers. This subsystem is capable of nanosecond-resolution timing when running on supported hardware. Clocks in Linux can be synchronized using external sources, allowing a distributed system to have high-resolution, synchronized timing facilities. The MPGD framework uses Linux’s high-resolution timers to implement time-based scheduling policies.

Representation of Infinity

// struct timespec format
#define MPGD_TSPEC_MAX_SEC      (1<<30)
#define MPGD_TIMESPEC_POS_INF   { .tv_sec = MPGD_TSPEC_MAX_SEC, .tv_nsec = 0 }
#define MPGD_TIMESPEC_NEG_INF   { .tv_sec = (-MPGD_TSPEC_MAX_SEC), .tv_nsec = 0 }

// ktime_t format
#define MPGD_KTIME_POS_INF      (ktime_set(MPGD_TSPEC_MAX_SEC, 0))
#define MPGD_KTIME_NEG_INF      (ktime_sub(ktime_set(0, 0), ktime_set(MPGD_TSPEC_MAX_SEC, 0)))

The large-magnitude value MPGD_TSPEC_MAX_SEC is used to represent positive infinity. Its corresponding negative value, -MPGD_TSPEC_MAX_SEC, represents negative infinity. Both positive and negative values specified in user-space and used in the MPGD scheduler must be comparable. To accomplish this, the in-kernel values of negative and positive infinity, MPGD_KTIME_[POS|NEG]_INF, are expressed in terms of the user-space values, MPGD_TIMESPEC_[POS|NEG]_INF.


Events are represented in the MPGD framework using a semi-structured representation. Each event contains a header that stores the timestamp of the event, and a payload which is an opaque data store that contains application specific data. Currently the total size of an event payload is a configurable, fixed size.

Event Header

#define MPGD_MAGIC_NUM 0x4321abcd
struct mpgd_event_header {
        int magic;
        struct timespec timestamp;
};

The event header contains an integer field (magic) which contains a known value. This is used to detect possible errors in transmission over socket-based event delivery channels. Each header contains a timestamp in the user-space compatible struct timespec format.

User-space Representation

#define MPGD_EVENT_SIZE 32
struct mpgd_event {
        struct mpgd_event_header header;
        char payload[MPGD_EVENT_SIZE];
};

The user-space view of an event is an instance of struct mpgd_event. This structure contains a header for the event and the payload. The header is examined by the MPGD framework for timing information. The payload is never interpreted by the MPGD framework.

Kernel-space Representation

struct __mpgd_event {
        struct list_head queue_ent;
        ktime_t timestamp;
        struct mpgd_event data;
};

The kernel-space representation of an event contains an instance of the user-space view of an event as well as two additional fields, timestamp and queue_ent. The additional timestamp field stores the timestamp from the event’s header in ktime_t format for use with Linux’s high-resolution timers. The queue_ent field is used to place the event on a linked list implementing per-port input queues.


A port in the MPGD framework is represented internally as an instance of struct mpgd_port.

struct mpgd_port {
        int type;
        int idx;
        int direction;

        struct gsched_member *member;

        /* specific to network socket ports */
        struct socket *socket;
        struct list_head queue;

        int inuse;
};


type: The type of the port, either socket-based or custom. A socket-based port is monitored for MPGD events; timestamp information is automatically extracted and made available to the framework. Events transferred on socket-based connections are placed on per-port queues so that MPGD configurations can re-order the list. A custom port stores timing information and knowledge of events, but actual event delivery is handled in an application-specific way (e.g. shared-memory applications can pass events using custom data structures for optimization).

idx: An integer value used to identify the port. This is also used as an index into various per-port look-up tables, such as the adjacency matrix that describes an MPGD model.

direction: Either input or output.

member: The Group Scheduling member (thread/process) to which this port is logically connected.

socket: The kernel-level (internal) representation of a socket. This is only used when the port is a socket-based port.

queue: Holds events extracted from socket-based ports. This is the input queue for a socket-based input port.

inuse: Internal management flag used to indicate that the port has been allocated and is not available to be used as a new port.


struct mpgd_run_queue {
        struct list_head wait_list;
        struct list_head run_queue;
};

The per-CPU run-queues used to implement scheduling policies. Currently MPGD schedules members (i.e. threads/processes) from the run_queue field using a semantics that considers each member on the run_queue sequentially. A specific MPGD configuration can implement higher-order semantics by re-ordering the run-queue. For example, an MPGD configuration may order the run-queue using the timing information of pending events to easily implement an EDF policy. The wait_list field is not required to be used, but may be convenient for storing a set of members that should not be scheduled. Alternatively, an MPGD configuration may choose to implement alternative data structures from which Group Scheduling members are scheduled.

Per-Group Data

struct mpgd_group_data {
        struct mpgd_config *config;

        struct mpgd_port ports[MPGD_MAX_PORTS];

        ktime_t delta0[MPGD_MAX_PORTS][MPGD_MAX_PORTS];
        ktime_t physical_delay[MPGD_MAX_PORTS];
        ktime_t offset[MPGD_MAX_PORTS];
        int dependency_cut[MPGD_MAX_PORTS][MPGD_MAX_PORTS];
        int input_group[MPGD_MAX_PORTS][MPGD_MAX_PORTS];
};

Per-Member Data

struct mpgd_member_data {
        struct hrtimer next_event;
        ktime_t next_event_ts;

        struct mpgd_port *ports[MPGD_MAX_PORTS];
};

Port Allocation/Registration

Ports used in an MPGD controlled application must be allocated and initialized before being used. The routine mpgd_alloc_port is used to allocate a port.

  • int mpgd_alloc_port(int gsfd, struct mpgd_port *p, int direction);
    • gsfd: Group Scheduling file descriptor
    • p: A struct mpgd_port used by the application
    • direction: The port direction, either MPGD_PORT_INPUT or MPGD_PORT_OUTPUT


/* declare application ports */
struct mpgd_port source_in, source_computation_out;
struct mpgd_port source_feedback_out;

/* allocate/register ports with scheduler */
mpgd_alloc_port(gsfd, &source_in, MPGD_PORT_INPUT);
mpgd_alloc_port(gsfd, &source_computation_out, MPGD_PORT_OUTPUT);
mpgd_alloc_port(gsfd, &source_feedback_out, MPGD_PORT_OUTPUT);

Socket-based Event Channel Integration

Figure 3 illustrates the integration of sockets into the MPGD framework. The following in-kernel API is used to implement the integration, including facilities to monitor sockets for MPGD events.

int mpgd_register_socket(struct socket *sock, struct gsched_member *member, struct mpgd_port *port);
void mpgd_sentto(struct sock *sk, struct sk_buff *skb);
void mpgd_remove_member(struct gsched_member *member);
void mpgd_release_member(struct gsched_member *member);


mpgd_register_socket: Specifies a mapping between a socket representing an MPGD port and the Group Scheduling member using the port. This information is used to identify sockets that will carry events.


mpgd_sentto: Identifies socket buffers that contain MPGD events. The socket buffer being sent is assumed to contain non-fragmented MPGD events. If the socket being considered is not registered as an MPGD socket, the function returns immediately.


mpgd_remove_member: Called by the Group Scheduling framework when a member is removed from the MPGD group. This routine removes the mappings between any sockets and the member.


mpgd_release_member: Called by the Group Scheduling framework when it is safe for the MPGD framework to remove any data structures that may reference the member.

MPGD Configurations

This section describes the configuration “module” idea. Essentially, it is now quite easy to implement PTIDES execution strategies because they are simply MPGD configurations.

Data Structures

struct mpgd_config {
        int id;
        char name[MPGD_MAX_STR];
        int (*init)(struct gsched_group *);
        void (*enqueue)(struct gsched_group *, struct gsched_member *, struct rq *, int);
        void (*dequeue)(struct gsched_group *, struct gsched_member *, struct rq *, int);
        void (*receive_event)(struct gsched_member *, struct mpgd_port *, struct __mpgd_event *);
};

Available Configurations

The following describes specific MPGD configurations that have been created.

PTIDES (Strategy C)

This section describes a PTIDES execution strategy referred to as Strategy C, following its naming in the RTAS ’09 paper. The strategy is implemented as an MPGD configuration.


Naming should reflect that the strategy is an MPGD configuration, and not only be prefixed by mpgd_.

Execution Strategy


  • Sections below reference this execution strategy frequently, so this section has been created to avoid redundancy.


The following routines are used by an MPGD application using the PTIDES Strategy C configuration. They are used to specify configuration-specific values used by the execution strategy.

int mpgd_set_delta0(int gsfd, struct mpgd_port *i, struct mpgd_port *o, struct timespec *time);
int mpgd_set_physical_delay(int gsfd, struct mpgd_port *i, struct timespec *time);
int mpgd_add_cut_port(int gsfd, struct mpgd_port *a, struct mpgd_port *b);
int mpgd_add_input_group_port(int gsfd, struct mpgd_port *a, struct mpgd_port *b);


mpgd_set_delta0: Set the delta0/d0 value for port i and port o, d0(i, o). The default value between pairs of ports that are not explicitly set is positive infinity; thus, use this routine to set non-infinite d0 values that specify a connection between ports. These values are also used to construct the connection graph, which is internally represented by an adjacency matrix.


mpgd_set_physical_delay: Set the value of the physical delay function for an input port i. The default value is negative infinity.


mpgd_add_cut_port: Add a port b to the dependency cut of port a.


mpgd_add_input_group_port: Add a port b to the input group of port a. The input group of a port a consists of the other input ports of the same actor that affect the same output ports as port a.

Data Structures

The following data structures are used by the execution strategy within the MPGD configuration.


Clear limitations exist on the number of ports supported, given the memory requirements of the following structures (e.g. the adjacency matrix). The abstraction layer presented by the MPGD configuration allows these to be replaced with more efficient representations at a later stage.

ktime_t delta0[MPGD_MAX_PORTS][MPGD_MAX_PORTS];
ktime_t physical_delay[MPGD_MAX_PORTS];
ktime_t offset[MPGD_MAX_PORTS];
int dependency_cut[MPGD_MAX_PORTS][MPGD_MAX_PORTS];
int input_group[MPGD_MAX_PORTS][MPGD_MAX_PORTS];


delta0: Holds the value of the delta0/d0 function between a pair of ports.


physical_delay: Holds the value of the physical delay function for an input port.


dependency_cut: Defines the dependency cut of the port indexed in the first dimension, using a boolean value in the second dimension to specify membership. For example, dependency_cut[2][5] is true if and only if port 5 is in the dependency cut of port 2.


input_group: Defines the input group of the port indexed in the first dimension, using a boolean value in the second dimension to specify membership. For example, input_group[2][5] is true if and only if port 5 is in the input group of port 2.


offset: The offset value that is calculated by the execution strategy prior to the model being run. The calculated offset depends on the above four functions being completely defined.


Application Initialization

An application using this strategy is responsible for specifying all of the timing parameters necessary to calculate the offsets used in the safe-to-process analysis. Once all of the timing information is specified the application may be initialized by calculating the static offset values.

Initialization Entry Routine

This is the routine called by an MPGD application to calculate the static offset values used by the execution strategy.

static int ptides_strategy_c_init(struct gsched_group *group)
{
        struct mpgd_group_data *gd = group->sched_data;
        struct mpgd_port *port;
        ktime_t offset;
        int ret, i;

8       for (i = 0; i < MPGD_MAX_PORTS; i++) {
                port = gd->ports + i;
10              if (port->direction == MPGD_PORT_INPUT) {
11                      ret = ptides_strategy_c_offset(group, port, &offset);
                        if (ret)
                                return ret;
14                      gd->offset[i] = offset;
                }
        }

        return 0;
}

Lines 8-10

Iterate over all ports and consider only input ports

Line 11

Calculate the offset value for the input port being considered. The calculated offset value is stored in the local variable offset.

Line 14

The offset value is saved in the offset array for use in safe-to-process analysis.


This is a simple implementation of Dijkstra’s algorithm. TODO: it will be included later, in case more debugging is required and it changes.

Enqueue Task

This routine corresponds to a thread being made runnable on a CPU (e.g. being awoken after waiting on an I/O completion). The CPU run-queues order the threads according to an earliest-deadline policy with respect to the timestamps of the events on the input queues of the actors implemented by the threads on the run-queue.

static void ptides_strategy_c_enqueue(struct gsched_group *group,
                struct gsched_member *member, struct rq *rq, int wakeup)
{
4       int cpu = cpu_of(rq);
        struct mpgd_run_queue *mpgd_rq;
        struct mpgd_member_data *md = member->sched_data, *tmp_md;
7       struct gsched_member *m;

        /* group's per-cpu data */
10      mpgd_rq = group->cpu_sched_data[cpu];

Line 10

This is a reference to the run-queue for the CPU that this code is being executed on.
12      if (list_empty(&mpgd_rq->run_queue)) {
13              list_add(&member->memlist1, &mpgd_rq->run_queue);
14              return;
        }

Lines 12-14

If the CPU’s run-queue is empty then no calculations are necessary: the thread is added to the run-queue and the routine immediately returns.
17      list_for_each_entry(m, &mpgd_rq->run_queue, memlist1) {
18              tmp_md = m->sched_data;
19              if (md->next_event_ts.tv64 < tmp_md->next_event_ts.tv64) {
20                      list_add(&member->memlist1, m->memlist1.prev);
21                      return;
                }
        }

        /* the latest */
26      list_add_tail(&member->memlist1, &mpgd_rq->run_queue);
}

Lines 17-18

Iterate m over all threads already in the run-queue. The variable tmp_md will hold the scheduling data specific to the execution strategy for the current thread, m.

Lines 19-21

Compare the timestamp of the next event on the input queue of the thread being added to the run-queue with the next event of each thread being iterated over. If the thread being added has an event that is earlier than that of the current thread m in the loop, the new thread is added ahead of m and the routine returns.

Line 26

The next event of the thread being added is later than that of every other thread in the run-queue, so the thread is added to the tail of the queue.

Dequeue Task

Enqueue Event

This routine receives an event (or information about an event) and updates the input queue of an actor to represent the received event.

  • TODO:
    • Add the safe-to-process calculation, and what happens if the received event is not safe to process.
    • Add HR-timer usage in this routine.

Dequeue Event

Using MPGD: Example Application

The following outlines the basic usage of the MPGD programming model. This example makes use of the PTIDES configuration described in PTIDES (Strategy C).

Header File Requirements

The following are a general set of header file and macro requirements in MPGD applications.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <string.h>
#include <pthread.h>

/* userspace MPGD library */
#include "mpgd.h"

#include <sched_gsched.h>
#include <linux/gsched_sdf_mpgd.h>

#define gettid() syscall(__NR_gettid)


_GNU_SOURCE: Required to enable non-standard features.


mpgd.h: The MPGD header file that contains an interface to the Group Scheduling framework.


sched_gsched.h: The Group Scheduling user-space header.


linux/gsched_sdf_mpgd.h: Definitions specific to interfacing with the MPGD scheduler.


gettid(): Macro providing the Linux-specific get-thread-id system call. Using getpid() will not work because it does not distinguish between threads.

Pthread Setup

This example application uses a single thread per actor in the PTIDES model. A pthread structure is thus allocated for each actor in the model:

/* pthreads */
static pthread_t clock1_t, clock2_t;
static pthread_t time_delay_1_t, time_delay_2_t;
static pthread_t computation_t;

Port Allocation

Ports should be allocated in a location that is globally accessible by the threads in the application. This is not a requirement, but it simplifies the software design. Ports are of type struct mpgd_port, which is defined in the MPGD header file. Port structures cannot be re-used, and a separate port structure must be created for each input and output port in the model. The following shows a number of ports allocated for use in a model:

/* ports */
struct mpgd_port port1_out;
struct mpgd_port port2_out;
struct mpgd_port timed_delay1_in;
struct mpgd_port timed_delay1_out;
struct mpgd_port timed_delay2_in;
struct mpgd_port timed_delay2_out;
struct mpgd_port add_subtract_in1;
struct mpgd_port add_subtract_in2;
struct mpgd_port add_subtract_out;

Group Scheduling

All MPGD routines that communicate with Group Scheduling must reference the Group Scheduling framework through its special device driver interface. The easiest way to do this is to use a global integer variable holding an open file descriptor to the device. This should be made globally available, as all threads in the application will reference it.

static int gsfd;

Main Setup

Initialize Group Scheduling

The Group Scheduling interface is built using IOCTL calls into a pseudo-device driver. The following fills in the global variable that provides access to the Group Scheduling framework through an open file descriptor.

gsfd = grp_open();
if (gsfd < 0) {
        return gsfd;
}

Create the Root MPGD Group

Create the Group Scheduling group using the MPGD Scheduler. The name of the new group is called “mpgd”, and the name used to reference the MPGD SDF in the kernel is “sdf_mpgd”.

grp_create_group(gsfd, "mpgd", "sdf_mpgd");

Initialize Ports

Ports allocated by an application for use in a model must be registered with the Group Scheduling MPGD scheduler. This is done using the MPGD library routine mpgd_alloc_port. The following calls register the ports previously allocated.

mpgd_alloc_port(gsfd, &port1_out, MPGD_PORT_OUTPUT);
mpgd_alloc_port(gsfd, &port2_out, MPGD_PORT_OUTPUT);
mpgd_alloc_port(gsfd, &timed_delay1_in, MPGD_PORT_INPUT);
mpgd_alloc_port(gsfd, &timed_delay1_out, MPGD_PORT_OUTPUT);
mpgd_alloc_port(gsfd, &timed_delay2_in, MPGD_PORT_INPUT);

Socket Port Configuration

Ports implemented using sockets require an additional configuration step that configures the port with respect to an existing socket. For example, the following allocates a Linux socket-pair, and initializes a connection to use the two sockets:

int socks[2];

socketpair(AF_UNIX, SOCK_DGRAM, 0, socks);
mpgd_set_port_socket(gsfd, &port1_out, MPGD_PORT_OUTPUT, socks[1]);
mpgd_set_port_socket(gsfd, &timed_delay1_in, MPGD_PORT_INPUT, socks[0]);

Define Model Network

Internally the MPGD framework represents the connections in a model using an adjacency matrix. By default each port-to-port connection cost is set to positive infinity, representing that no connection exists between the two ports. To specify that a connection exists in a model, use finite cost values.

All output ports should be connected to their corresponding input port with a cost of zero:

struct timespec ts;

ts.tv_sec = 0; ts.tv_nsec = 0;
mpgd_set_delta0(gsfd, &port1_out, &timed_delay1_in, &ts);
mpgd_set_delta0(gsfd, &port2_out, &timed_delay2_in, &ts);
mpgd_set_delta0(gsfd, &timed_delay1_out, &add_subtract_in1, &ts);

The third parameter is the value being set, expressed using a struct timespec initialized to the desired value.

The delta0 (d0) time between an input port and output port of an actor is also set using mpgd_set_delta0. In this example the value is set to 10 microseconds:

ts.tv_sec = 0; ts.tv_nsec = 10000;
mpgd_set_delta0(gsfd, &timed_delay1_in, &timed_delay1_out, &ts);

Specifying Dependency Cuts and Input Groups

Dependency cuts and input port groups are created using the following routines, where port-b is added to the relevant set of port-a:

  • mpgd_add_cut_port(gsfd, port-a, port-b);
  • mpgd_add_input_group_port(gsfd, port-a, port-b);
mpgd_add_cut_port(gsfd, &add_subtract_in1, &timed_delay1_in);
mpgd_add_cut_port(gsfd, &add_subtract_in2, &timed_delay2_in);

mpgd_add_input_group_port(gsfd, &add_subtract_in1, &add_subtract_in2);
mpgd_add_input_group_port(gsfd, &add_subtract_in2, &add_subtract_in1);

Finalize Settings

After all parameters are specified, the MPGD scheduler must be informed that a complete specification has been provided. The following call informs the MPGD scheduler that all relevant information has been uploaded:

mpgd_config_init(gsfd);
Once a configuration has been finalized by calling mpgd_config_init(), the configuration may be printed for debugging using mpgd_print_config(). The information is printed to the Linux console, and can be viewed in the syslog or using the dmesg program.


Thread/Actor Setup

Actors are implemented using threads following a workloop-style programming pattern. The first thing a thread must do is configure itself to be controlled exclusively by Group Scheduling; that is, it must not be scheduled by the Linux scheduling semantics. The MPGD scheduler will have exclusive control over the execution of the threads implementing actors, and in this example will schedule the threads according to the PTIDES execution strategy. This is done using the following Group Scheduling library routine, where gettid() returns the TID of the current thread context:

gsched_set_exclusive_control(gsfd, gettid());

An actor implemented by a thread should follow an event-driven, workloop-style programming pattern. For example:

while (1) {
    if (no more events)
        block until events arrive
    receive event(s)
    process event(s)
    send event(s)
}

This pseudocode illustrates the basic programming pattern. A thread executes until no more events are available; for each event it receives, processing takes place before the resulting events are finally sent to the next actor in the model.

Receiving Events

An event is received from a specific port using mpgd_recv_event. The routine returns zero on success; otherwise an error value is returned. The mpgd_recv_event routine sets the flag event.is_active to true if an event was retrieved, and to false if no event was retrieved. An event may not be received when no events exist on the port's input queue, or when an event with a smaller timestamp exists on another input port of the same actor.

The following code illustrates basic handling of an event. The routine mpgd_recv_event receives an event into the event structure from the port port. If the routine returns a non-zero value an error has occurred and should be handled. If no error occurs, the value of event.is_active will have been set to either true or false; when the value is true the event may be processed according to the semantics of the actor.

if (mpgd_recv_event(gsfd, &event, &port))
        /* handle error */;

if (event.is_active)
        /* process event */;

An actor may have multiple input ports, in which case each port should be queried at the beginning of the actor's workloop using mpgd_recv_event. When multiple input ports exist for a given actor, either all ports or a subset of the ports will return events when mpgd_recv_event is called:

  • The event(s) with the smallest timestamp across all input ports are returned.

  • Events with identical timestamps on separate ports are returned by each call to mpgd_recv_event for those ports,
    • subject to that timestamp being the smallest among all events on all ports.
  • Ports returning no events will set event.is_active to false.

Consider the following code sample with three input ports (error handling has been omitted for simplicity):

mpgd_recv_event(gsfd, &e1, &p1);
mpgd_recv_event(gsfd, &e2, &p2);
mpgd_recv_event(gsfd, &e3, &p3);

if (e1.is_active)
        /* process e1 */;
if (e2.is_active)
        /* process e2 */;
if (e3.is_active)
        /* process e3 */;

When an input port contains an event with a unique, smallest timestamp among the ports, that event is retrieved and the other ports do not retrieve events. If two input ports contain events with the same timestamp, and this timestamp is smaller than that of any event on the remaining input port, both events are returned. The actor must examine the event.is_active flag to implement its own semantics; in the code block above each event is examined separately. Actors may want to process two events together when they have the same timestamp, which is possible by changing the actor's processing step.

Sending Events

An event is sent to a port using mpgd_send_event. A return value of zero indicates success; otherwise an error value is returned.

mpgd_send_event(gsfd, &event, &port);

When an event is sent via mpgd_send_event the event will be given a timestamp equal to the current time, and be sent to the port for delivery according to the semantics of the execution strategy.

Execution Termination

Execution is terminated by informing the MPGD scheduler, which in turn makes a boolean termination flag available to each thread executing an actor. The scheduler will run each thread, and expects each thread to check this flag and exit cleanly.

The termination flag is queried using the mpgd_should_exit() routine. The following code block illustrates its use:

while (1) {
        if (mpgd_should_exit(gsfd))
                break;

        if (mpgd_recv_event(gsfd, &event, &port_in))
                /* handle error */;

        /* process and send events */
}

Single Input/Output

In this code example there is exactly one input port and one output port. The workloop receives an event, processes the event, and sends it to the output port.

struct mpgd_event event;

gsched_set_exclusive_control(gsfd, gettid());

while (1) {
        if (mpgd_should_exit(gsfd))
                break;

        if (mpgd_recv_event(gsfd, &event, &port_in))
                continue; /* handle error */

        /* process event */
        mpgd_send_event(gsfd, &event, &port_out);
}

Multiple Input/Output

In this code example there are two input ports and two output ports. The received event(s) are handled based on whether one or two events were returned. After the event(s) are processed by the actor they are set up to be sent to the output ports. Note that it is not required that an event be sent to each output port; in this example, however, an event is always sent to each output port.

struct mpgd_event e1, e2;

gsched_set_exclusive_control(gsfd, gettid());

while (1) {
        if (mpgd_should_exit(gsfd))
                break;

        if (mpgd_recv_event(gsfd, &e1, &port_in1))
                continue; /* handle error */

        if (mpgd_recv_event(gsfd, &e2, &port_in2))
                continue; /* handle error */

        if (e1.is_active && !e2.is_active)
                /* process e1 only */;
        else if (!e1.is_active && e2.is_active)
                /* process e2 only */;
        else if (e1.is_active && e2.is_active)
                /* process e1 and e2 together */;

        mpgd_send_event(gsfd, &e1, &port_out1);
        mpgd_send_event(gsfd, &e2, &port_out2);
}