Showing posts with label publish/subscribe. Show all posts

Tuesday, November 8, 2011

Restructuring of Framework Remote Component



Now that the treatment of the message protocol for communications between the user applications and the display application had been incorporated into the mC framework, it became time to restructure the Remote component to treat the interface to the Display application using the same Ada packages and methods as those for other user applications.

This took me longer than expected and is not yet finished, mostly because some features were also added that are not strictly related to employing the user application Remote methods in common with the display interface (the change that avoids the duplication that occurred when the cloned user display components were moved into the framework Remote component).  Four of these features were:

   1) to get application App3, which contains C and C++ user components, up to date after having been ignored for quite some time while a number of framework changes were being made;
   2) to use Windows interfaces to check which of the other applications of the configuration of the current PC are running;
   3) to use Windows to do the same for a different PC in the configuration; and
   4) to send a Remote Register Complete topic between running user applications to indicate to a connected application that the sending application has completed its Remote Registration with the receiving application.  Sometime in the future this can be used to avoid transmitting other topics until the sending application determines that the connected application has completed its remote registration with the sending application and hence is ready to treat remote topics.

Another feature to be added will be to monitor the communications between the user application and the display application to check that the two applications remain connected or, if they become reconnected, to continue as well as possible from where communications were lost.  Yet another (a structure-only change) will be to package the remote registration procedures into a Registration unit for easier recognition of their common rationale.

An unrelated additional feature will be to have the first running application of the configuration launch the others.

As part of this change, the configuration file was changed to have a user applications portion and a display applications portion so that one such file can be used both by the user application framework and by the display application.  The user application framework needed this change as part of incorporating the treatment of the display interface via the same methods as the user applications.

While making these changes an error was detected where the application identifier was treated (in various places) as a direct index into arrays of data concerning applications of the configuration and connected applications.  This had not previously become apparent since the debug cases had been for user applications 1, 2, and 3.  When I added the display application for common treatment, I separated the possible identifier values into one range for user applications and another range for display applications.  Thus the display application came to be assigned an identifier of 15 while the arrays were sized as 1 to 5, so the identifier could no longer be used as a direct index.  Further, I could find more of this sort of error by introducing gaps in the user app identifiers (for instance, switching app 2 to app 5), which I will need to do.
Above is a figure showing how the user Display components were previously moved into a common framework Display component. 

The following second figure illustrates how the Display interface now uses the same Methods as the user components and how, in the future, it will use the same communications Monitor package and package the remote registration into its own unit.
The new diagram is meant to show that the Remote Component with its thread (the top middle oval indicating the thread, with the enclosed icon indicating the Ada package) includes inputs from and outputs to the tables and reads the Transmit, Receive and Watch Queues upon receiving a wakeup event (that is, the notify of a dataless topic that is published as part of adding the entry to the queue).

The Transmit Queue is written via the publish of a topic from various components while the Receive Queue is written to contain messages received from other applications including the Display App via the receive ports of the MS_Pipe and WinSock communication methods that each have a thread that blocks to wait for a new message from its associated application. 

Received display messages are passed to the Display subpackage that converts them to instances of the Display Event Request topic message and then publishes them for distribution.  In this case the Remote component also queues the topic to the Transmit Queue.

A received Display Event Response topic (dequeued from the Receive Queue), as well as an unsolicited Display Command topic message, is treated via the Display subpackage when received by the framework (while running under the publishing component's thread).  The Display package supplies the Display protocol header in place of that of the Topic protocol and transmits to the display app via the selected Remote Method.

Hence the Display package is the Display Protocol to Topic Protocol converter and vice versa.  This design, as mentioned in the previous post, allows other protocols, such as an A661 Protocol, to be added.  Only another protocol converter need be added, just as another communication method can be added as a selection of the Method package.  When this becomes necessary, a protocol selector package should be added to invoke callback procedures of the particular converter package, as is done by Method to select those of MS_Pipe or WinSock.  Other communication drivers can, of course, be added as well, where Method would then also contain callback procedure entries to the new method.

Monitor / Registration

The Monitor package will be discussed after monitoring of the user-display application traffic has been implemented.  The Registration package will just be grouping the remote register procedures within such a package. 

Windows Interface to Detect Running Applications

There are supposed to be newer Windows interfaces that return whether a particular process (that is, application) is running on the current PC or on a named one, and others that start a process on the current PC.

Since these interfaces don't seem to be available via the GNAT libraries (and certainly not via Win32Ada, which contains interfaces to much older versions of Windows), I attempted to access them through a Visual C++ function and link it via gnatlink with the rest of the exploratory project.  After various attempts I put that aside to return to later.

I can, however, detect whether a specific application is currently running via the Windows interfaces provided with GNAT and Win32Ada.  The interface
  BOOL WINAPI EnumProcessModules
  ( __in   HANDLE hProcess,
    __out  HMODULE *lphModule,
    __in   DWORD cb,
    __out  LPDWORD lpcbNeeded
  );
can be used.  To do this, the appRunningC.c function was created and imported to Ada via
  pragma Import(C, App_Running, "appRunningC");

The Ada declaration used was
  function App_Running
  ( Application : in Interfaces.C.Strings.Chars_Ptr
  ) return Interfaces.C.char;
where the Chars_Ptr is used to point to a null terminated character array containing the name of the application along with its path as obtained from the Apps-Configuration.dat file.  The returned character is typecast to a byte value to indicate whether the application is running or not.

The code of appRunningC is
#include <mC-ItfC.h>
#include <ctype.h>
#include <psapi.h>
#include <stdio.h>
#include <string.h>
#include <tchar.h>
#include <windows.h>

// Microsoft Help comment:
//   To ensure correct resolution of symbols, add Psapi.lib to TARGETLIBS
//   and compile with -DPSAPI_VERSION=1
// My comment:
//   For GNAT, include C:\GNAT\2011\lib\gcc\i686-pc-mingw32\4.5.3\libpsapi.a
//   in the gnatlink command.

void strConvert( char * inStr, char * outStr )
{ // Convert string to all lower case while switching any forward slash
  // in pathname to a backward slash
  int i = 0;
  while (inStr[i] != 0)
  {
     if (inStr[i] == '/')
     { outStr[i] = '\\';
     }
     else
     { outStr[i] = tolower(inStr[i]);
     }
     i++;
  } // end loop
  outStr[i] = 0; // attach trailing null
} // end method strConvert

int MatchModulePathname( DWORD processID, char * Application )
{ // Check application name at beginning of module list for process for match.
    HMODULE hMods[1024];
    HANDLE hProcess;
    DWORD cbNeeded;

    // Get a handle to the process (that is, application).
    hProcess = OpenProcess( PROCESS_QUERY_INFORMATION |
                            PROCESS_VM_READ,
                            FALSE, processID );
    if (NULL == hProcess)
        return 0; // false

    // Get a list of all the modules in this process.
    if ( EnumProcessModules(hProcess, hMods, sizeof(hMods), &cbNeeded) )
    {
        // Only the first name will contain the application name of the process.
        TCHAR szModName[MAX_PATH], ModName[MAX_PATH];

        // Get the full path to the module's file.
        if ( GetModuleFileNameEx( hProcess, hMods[0], szModName,
             sizeof(szModName) / sizeof(TCHAR)) )
        {
            strConvert( szModName, ModName );

            // Check if module name with path matches that of the Application.
            if (strcmp(Application,ModName) == 0)
            {
               // Release the handle to the process and return found.
               CloseHandle( hProcess );
               return 1; // true
            }
        }
    } // end if EnumProcessModules

    // Release the handle to the process.
    CloseHandle( hProcess );
    return 0; // false
} // end method MatchModulePathname

extern "C" char appRunningC( char * Application )
{ // Determine if Application is running.
    DWORD aProcesses[1024];
    DWORD cbNeeded;
    DWORD cProcesses;
    unsigned int i;

    // Get the list of process identifiers.
    if ( !EnumProcesses( aProcesses, sizeof(aProcesses), &cbNeeded ) )
        return 2; // Application cannot be found due to problem

    // Calculate how many process identifiers were returned.
    cProcesses = cbNeeded / sizeof(DWORD);

    // Examine module names for each process for match to input.
    char AppPath[MAX_PATH];
    strConvert( Application, AppPath );
    for ( i = 0; i < cProcesses; i++ )
    {
        if (MatchModulePathname( aProcesses[i], AppPath ) == 1)
        {
            return 1; // Application found
        }
    }

    return 0; // Application not found
} // end method appRunningC

This routine is currently being used to check whether each application of the configuration (other than the application that is doing the checking) is running.  This can be changed to pass a list of all of the other applications in the configuration to appRunningC and return an array of those that are running to avoid multiple calls to the OpenProcess and EnumProcessModules functions.

GNAT Compiler

While making the various changes it seemed that GNAT GPS no longer rebuilt changed separates.  This seemed to be the case after the addition of the C function for use by the Ada framework Remote-Method-MS_Pipe package to interface to Windows to check what applications of the configuration were running.  Therefore I had to start compiling the packages that I changed with statements such as
c:\gnat\2011\bin\g++ -c -g -IC:\Source\EP\UserApp\Try10 -IC:\Source\EP\Util
  -IC:\Win32Ada\src -aLC:\Source\EP\ObjectC
  C:\Source\EP\UserApp\Try10\mc-message-remote.adb -o mc-message-remote.o
in a batch file (where the indented lines are actually a continuation of the first line), as well as compiling the .c file.  The -g switch includes debug symbols in the .o object file to be included in the eventual .exe file by gnatlink.

Remote Register Complete topic

The implementation of the Remote Register Complete topic took longer than expected.  This topic is both produced and consumed by the Remote framework component.  However, the instance produced by the Remote component of one application is meant for that of another application. 

Therefore the topic used the recently added Requested Delivery method of message exchange so each application could register to consume a delivery identifier that equaled its application identifier.  The Remote component of the publishing application supplies the delivery identifier of the application to receive the topic as it publishes the topic.  Therefore there are as many possible publishers of the topic as there are applications and the same number of consumers.

This is how the message exchange delivery method was intended to be used.  The difference in this case was that the same component was both a publisher and a consumer so its Role was Either.  Since this Role hadn't been previously used, a few problems had to be tracked down and corrected.  However, another problem caused confusion in identifying which problems were caused by the new role.

That was due to one application of a pair seeming to transmit its instance of the topic to the other application and that application seeming to create its instance but not doing the transmit.  Finally (I must be getting old) I identified the reason.

The registering of a delivery identifier to be consumed involves, as has been done in the past, two variables where the second one is referenced in the Register call.  The first of these is an array of identifier pairs indicating five possible pairs of delivery identifiers that indicate the identifiers the component is to treat.  Each pair indicates a range of values where the second entry can be 0 to indicate that there is no range; only the particular identifier of the first entry.  The second variable contains the count of the number of pairs to examine and the doubly indexed array of values.  Normally, of course, the count will be one.

Registering to consume the Remote Register Complete topic, the Remote Install procedure declared
    Delivery_Id_List
    --| Treat all "Remote Register Complete Topic" messages with ids of the app
    : mC.Itf_Types.Delivery_Id_Array_Type
    := ( ( 1, 1 ), ( 0, 0 ), ( 0, 0 ), ( 0, 0 ), ( 0, 0 ) );
    Delivery_Id
    --| Deliver topic when identifier is that of local app
    : mC.Itf_Types.Delivery_List_Type
    := ( Count => 1,
         List  => Delivery_Id_List );
mimicking what had been done in the past for other Requested Delivery and Delivery Identifier delivery methods, except that in those cases the Delivery_Id_List object was declared as constant since the identifiers were known at compile time.

In this case, since the pair of delivery ids depends upon the application's own identifier, which isn't known until initialization time, the value was initialized as shown above and then modified just prior to the Register call to contain the running application's id as shown below.


  Delivery_Id_List := ( ( Integer(Local_App.Id), Integer(Local_App.Id) ),
                        ( 0, 0 ), ( 0, 0 ), ( 0, 0 ), ( 0, 0 ) );


  mT.Remote_Register_Complete_Topic.Request.Data.Register
  ( Participant => Component_Main_Key,
    Detection   => mC.Itf_Types.Event_Driven,
    Role        => mC.Itf_Types.Either,
    Delivery_Id => Delivery_Id,
    Callback    => Monitor.Treat_Register_Complete'access,
    Access_Key  => Remote_Register_Complete_Topic_Access_Key,
    Status      => Data_Status );
where the Count in the Delivery_Id variable remains as 1.

Since this kind of Register (other than the use of Either for the Role instead of Consumer) had been working for on the order of a year or more, it didn't occur to me that this late resetting of the Delivery_Id_List could be causing any problems.

However, I finally determined that the late assignment could never take effect: the declaration of Delivery_Id captures a copy of Delivery_Id_List when it is elaborated, so resetting Delivery_Id_List afterwards leaves the copy inside Delivery_Id unchanged.  So both applications of the pair were specifying that they would consume delivery id 1.  I changed the source code to


  declare


    Delivery_Id_List
    --| Treat all "Remote Register Complete Topic" messages with ids of the app
    : mC.Itf_Types.Delivery_Id_Array_Type
    := ( ( Integer(Local_App.Id), Integer(Local_App.Id) ),
         ( 0, 0 ), ( 0, 0 ), ( 0, 0 ), ( 0, 0 ) );


    Delivery_Id
    --| Deliver topic when identifier is that of local app
    : mC.Itf_Types.Delivery_List_Type
    := ( Count => 1,
         List  => Delivery_Id_List );


  begin

    mT.Remote_Register_Complete_Topic.Request.Data.Register
    ( Participant => Component_Main_Key,
      Detection   => mC.Itf_Types.Event_Driven,
      Role        => mC.Itf_Types.Either,
      Delivery_Id => Delivery_Id,
      Callback    => Monitor.Treat_Register_Complete'access,
      Access_Key  => Remote_Register_Complete_Topic_Access_Key,
      Status      => Data_Status );


  end;

This fixed the problem, with the Remote component of each application transmitting the instance of the topic that it published to the other application of the pair, where it was then delivered to the Treat_Register_Complete procedure of the Monitor subpackage.

This resembles the kind of problem that pragma Volatile guards against: an object is changed in one place while the generated code fails to use the new value, as can happen when an object is changed by one thread and read by another, or when an address is passed to a procedure that then uses it to change a value used later by the calling routine.  Here, though, the declaration simply took a copy of the list, so the later assignment to the original could never reach it.

The remaining gotchas were then immediately found and fixed.

Thursday, January 13, 2011

Some Commentary on a Recent eetimes Article and a Referenced Paper


An eetimes article in their online edition of 1/10/2011, “Rapid development and reusable design for the connected car” by Kristopher Cieplak, was of interest to me.  See http://www.eetimes.com/design/embedded-internet-design/4212037/Rapid-development-and-reusable-design-for-the-connected-car?Ecosys.  The article refers to a framework that seemed to have similarities to that of my Exploratory Project.  Further, it referenced a paper by another QNX Software Systems author, Ben VandenBelt, “Persistent Publish/Subscribe for Embedded Industrial Applications”, that can be located at www.qnx.com.  This latter paper is about the framework.

Some paragraphs of interest are as follows along with my interspersed comments concerning some of them.  In my comments I use application to refer to either a server or a client.  An application can be a server to particular applications of the network and a client of others.  An application can refer to a partition or a separately loadable Windows process.  An application consists of numerous components, each of which can likewise be a server to other components of the application and a client of others; that is, a component can be moved from one application to another without the need to modify it.

The VandenBelt paper mentions “send/receive/reply (or synchronous) messaging” which is one of the options that can be selected for mC.  However, VandenBelt states that “Send/receive/reply messaging closely couples sender and receiver.  Every server communicates directly with its clients, and must know how to respond to all client messages.  With messaging thus closely coupled, a change to one software component may require changes to other software components, slowing or hindering system development and increasing system fragility.” 

That is not a problem with mC since the application that treats any particular message topic can be modified with the set of applications determining the application and software component that treats a message at power up via the subscription/registration messages and with the reply returned to whatever component sent the request. 

However, the point is made that “as a system increases in scale and as diverse components are added to it, that system rapidly grows in complexity and becomes increasingly brittle – difficult to upgrade and scale while ensuring performance and reliability.”  mC has never had to grow much so the subscription/registration messages might become a problem.  And, of course, as a one person experiment, there has never been anything done, as yet, concerning an application dropping offline, coming back online, etc. 

mC does use direct point-to-point connections – at least it doesn’t use a series of applications such that one (or more) application is merely passing the message along towards its eventual destination – if that’s what is meant by “direct”.  Of course it doesn’t do that for asynchronous messages either.  It requires that each application with its set of components can communicate directly with each of the other applications that are either its clients or its servers (or both).

The VandenBelt paper refers to an Embedded Design article “Making the case for commercial communication integration middleware” by Jerry Krasner.  It only seems to mean by “direct” that the applications are “socketed” together and that the old applications that now use the new application have to be reworked to take into account the protocol being used by the new application. 

The mC framework supports an integrated set of applications where each uses the same protocol.  However, non-compliant applications can be added to the network by adding a translation component to a compliant application.  For instance, a Display Application was added to the exploratory project to interface to Windows to display Windows forms and controls and transmit click, enter, etc events to compliant applications for treatment.

To avoid the need to modify existing components, except any that must treat the Windows event or produce a modification of a form or control, a translation component was added to each application that must directly interface with one of the Display Application “layers”.  These translation components are receive/transmit components that receive the non-standard message (non-standard in regard to the mC framework protocol) and convert it to contain the standard protocol header while retaining the data of the received message and then publishing it to be forwarded by the framework to the component (in whichever application it might reside in) that had subscribed for it.

The reply, if any, and other messages being sent to the non-standard application are first sent to the transmit sub-component of the translation component (which has previously subscribed for them) to be converted to the protocol of the non-standard application and then sent to it. 

In this way the non-standard application stays as is and the treating component stays as is except for the addition of the translation component to the application of the treating component.  If such a minimal change requires that all the unchanged components be retested, then, to avoid this problem, it should be possible to insert a new translation application (i.e., partition) that takes the place of the translation component.  Then the message would be routed from the non-standard application to the translation application and from it to the standard protocol application and vice versa.

Other paragraphs of interest to me from the Ben VandenBelt paper are:
An Object-based System


Publishing is asynchronous, PPS objects are integrated into the PPS filesystem pathname space.  Publishers modify objects and their attributes, and write them to the filesystem.  When any publisher changes an object, the PPS service informs all clients subscribed to that object of the change.  PPS clients can subscribe to multiple objects, and PPS objects can have multiple publisher as well as multiple subscribers.  Thus, publishers with access to data that applies to different object attributes can use the same object to communicate their information to all subscribers to that object.

This seems directly similar to Upon Demand topics of the mC framework (objects, as referred to in the VandenBelt paper) where the consumer component can subscribe to be notified when a new instance of the topic is published.

Push or Pull?

In its default implementation, the QNX PPS service acts as a push publishing system; that is, publishers push data to objects, and subscribers read data upon notification or at their leisure.

However, some data, such as packet counts on an interface, changes far too quickly to be efficiently published through PPS using its default push publishing.

QNX PPS therefore offers an option that allows a subscriber to change PPS into a pull publishing system.  When a subscriber that opened an object with this option issues a read() call, all publishers to that object receive a notification to write current data to the object.  The subscriber’s read blocks until the object’s data is updated, then returns with the new data.

With this pull mechanism, the PPS subscriber retrieves data from the publisher at whatever rate it requires – in effect, on-demand publishing.

Persistence

A Persistent Publish/Subscribe service maintains data across reboots. …

System Scalability

With PPS, publisher and subscriber do not know each other; …

The loosely-coupled PPS messaging model also simplifies the integration of new software components.  Since publisher and subscriber do not have to know each other, developers adding components need only to determine what these new components should publish, and what data they need other PPS clients to publish.  No fine-tuning of APIs is required, and system complexity does not increase as components are added.


With PPS, components do not even need to be aware of each other’s existence on the system.

The pull publishing concept represented by
“QNX PPS therefore offers an option that allows a subscriber to change PPS into a pull publishing system.  When a subscriber that opened an object with this option issues a read() call, all publishers to that object receive a notification to write current data to the object.  The subscriber’s read blocks until the object’s data is updated, then returns with the new data.

With this pull mechanism, the PPS subscriber retrieves data from the publisher at whatever rate it requires – in effect, on-demand publishing.”
seems interesting.  mC has a concept that somewhat corresponds.  It uses Publish/Subscribe for the Topic Delivery Protocol (versus Point-to-Point) as well as Most Recently Published (MRP) for Topic Permanence.  (Note:  Permanence in mC is different than Persistence.)  For this combination, mC prepares a MRP instance of the published topic each time the publishing component executes and writes the topic.  However the subscribers are not notified.  Instead, each one gets the most recently published data when it reads.  Therefore, the subscriber retrieves the most current data at its own rate.  Since a low priority subscriber could still be reading a previously published instance, a non-used buffer is utilized to publish a new MRP message instance.

The following is extensive quoting of the eetimes article in regard to features where the QNX framework seems similar to my exploratory project or could be applicable to it.

Persistent Publish/Subscribe

PPS is an object-based service with publishers and subscribers in a loosely-coupled messaging architecture.  Any PPS client to the service can be a publisher only, a subscriber only, or both a publisher and a subscriber, as required by the implementation.

Publishers and subscribers only need to be able to read and write objects and their attributes in the PPS file system pathname space. Subscribers must of course know what objects and attributes interest them, and publishers must know what objects and attributes may interest subscribers, but neither publisher nor subscriber needs to know anything more about other parts of the system. Objects are written to permanent storage, offering persistence across reboots.

We implemented PPS to handle messaging between Adobe Flash applications and all data source publisher components; that is, for Webkit (browser), Bluetooth, GPS, audio volume control, etc. Chief among the advantages offered by the PPS messaging model is that the API between the components is consistent and loosely coupled.

Just as PPS allows us to redesign our HMI without touching the underlying applications, it allows us to add new components (vehicle telematics or ITS awareness, for instance) to our QNX CAR implementation without the time-consuming development usually required with other messaging paradigms. All that is needed is that all parties know what they need to publish, and what they need to read in from PPS. Further, this architecture ensures that other components do not require changes to accommodate new additions: changes which, as every software developer knows, at best require exhaustive testing, at worst introduce errors.

PPS thus provides us with a messaging model that allows us to add new components to our in-vehicle system with minimal effort beyond the HMI work required so that users can use the new capabilities offered.

Resource separation

The technologies we chose for QNX CAR offer two techniques (beyond standard process and memory protection) for managing the impact of new applications on an in-vehicle system. First, the Adobe Flash-based HMI allows us to run a secondary Flash player whose virtual machine serves as a "sandbox" for running untrusted applications. Second, the QNX  Neutrino RTOS offers adaptive partitioning, a unique technology that dynamically offers unused CPU time to processes that need it, but guarantees resources to critical processes.

Secondary Flash player

To ensure that a newly introduced application does not introduce problems to our system, we chose to implement a secondary Adobe Flash player. The player is reserved for untrusted applications; that is, for applications that we have not confirmed will run cleanly without adverse effects on the reliability and performance of other applications, or of the system as a whole.

Like all Flash players, this secondary player runs in its own virtual machine environment, and hence can be neatly separated from the rest of the system.  Applications in this secondary player’s virtual machine environment can not deprive applications in the primary player or other components in the system of the resources they require.

This simple approach allows us to try virtually any Flash application we choose, without worrying that it might bring down the system. In fact, any developer could write an application and run it on this secondary player without danger. He could use adaptive partitioning to ensure that the secondary Flash player did not interfere with applications in the primary Flash player, and that the applications this secondary player ran would not starve the system.

For a more detailed description of QNX PPS, see Ben VandenBelt, "Persistent Publish/Subscribe for Embedded Industrial Applications". QNX Software Systems, 2010. www.qnx.com.

Partitioning

Resource partitions are the traditional method implemented in OSs to protect different applications or groups of applications from each other. They are virtual walls that prevent one application from corrupting another application, or from starving it of  resources. The primary resource protected by partitions is CPU time, but partitioning can be used to protect other shared resources, such as memory or file space (disk or flash).

Fixed partitioning guarantees that processes will get the resources specified by the system designer, but lacks flexibility.

Rigid partitions

Traditional, rigid partitions are relatively easy to set, and they are effective; they guarantee that each application receives the resources specified by the system designer. They may not always be the best solution in embedded systems, such as in-vehicle systems, however.

In-vehicle systems are embedded systems, which means that the boards they run on are usually constrained by power, heat and cost considerations. Power requirements are less stringent for in-vehicle systems than for consumer devices, such as smart phones. On the other hand, these systems are expected to run for the lifetimes of their vehicles, during which they can expect a steady increase in the load they are expected to handle, as more and more technologies, capabilities and applications become available.

In short, an in-vehicle system OS must not only be able to guarantee resources to critical processes, it must also not waste resources. When an application requires more resources than the original design foresaw, the OS must find a way to provide them. It must not leave resources waiting unused because they are reserved for critical processes. It must offer them to whatever process needs them, but be able to take them back when they are needed, to ensure the system always meets designed reliability and performance requirements.

Adaptive partitioning

To meet these apparently conflicting requirements typical of embedded systems, the QNX Neutrino Real Time OS (RTOS) implements adaptive partitioning. Adaptive partitioning is more flexible than traditional fixed partitioning models. It guarantees time to specified processes, just like traditional partitions. However, unlike traditional partitions, which are fixed, adaptive partitioning automatically adapts partitions to runtime conditions.

An adaptive partition is a set of threads that work on a common or related goal or activity. Like a static partition, an adaptive partition has a budget allocated to it that guarantees its minimum share of the CPU's resources, but it responds to dynamically changing runtime loads in order to always make full use of all available CPU cycles.

Adaptive partitioning is more flexible than rigid partitions. It is a set of rules that protect specified threads and groups of threads, and is an excellent solution for dynamic embedded systems.

First, unlike a static partition, an adaptive partition is not locked to a fixed set of code in a static partition; developers can dynamically add and configure adaptive partitions, as required.

Second, an adaptive partition behaves as a global hard realtime thread scheduler under normal load, but can still provide minimal interrupt latencies even under overload conditions. Third, with adaptive partitioning, the thread scheduler distributes a partition's unused budget among partitions that require extra resources when the system isn't loaded.
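The budget idea described above can be illustrated with a toy model (this is a sketch of the concept only, not QNX's actual scheduling algorithm): every partition is guaranteed its budget, and cycles left unused by lightly loaded partitions are offered to partitions demanding more, here in proportion to their budgets.

```python
# Toy model of adaptive-partitioning CPU allocation: guarantee each
# partition its budget, then redistribute unused time to partitions
# that demand more than their budget. Partition names and percentages
# are made up for illustration.
def allocate(budgets, demands):
    """budgets/demands: dicts of CPU percentages (budgets sum to 100)."""
    # Each partition first gets what it demands, up to its budget.
    alloc = {p: min(budgets[p], demands[p]) for p in budgets}
    # Partitions still wanting more than their guaranteed share.
    hungry = {p: demands[p] - alloc[p] for p in budgets
              if demands[p] > budgets[p]}
    spare = 100 - sum(alloc.values())
    while spare > 1e-9 and hungry:
        # Offer the spare time in proportion to each hungry
        # partition's budget, capped at what it actually wants.
        weight = sum(budgets[p] for p in hungry)
        for p in list(hungry):
            share = min(hungry[p], spare * budgets[p] / weight)
            alloc[p] += share
            hungry[p] -= share
            if hungry[p] <= 1e-9:
                del hungry[p]
        spare = 100 - sum(alloc.values())
    return alloc

# The HMI is idle, so the untrusted "apps" partition may exceed its
# 20% budget without ever cutting into what the others demand.
print(allocate({"hmi": 50, "media": 30, "apps": 20},
               {"hmi": 20, "media": 30, "apps": 60}))
```

Under full load the model degenerates to the rigid case: every partition demands its budget or more, there is no spare, and each receives exactly its guaranteed share, which is the safety property the whitepaper emphasizes.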

Adaptive partitioning is thus ideal for dynamic systems: systems that must perform consistently despite rapidly changing demands made on them.

Conclusion

The advent of the connected car brings in-vehicle systems into the rapid delivery cycle of consumer device technologies. Automobile manufacturers and tier one suppliers can design their in-vehicle systems both to mitigate the challenges of reconciling the development cycles of software and steel, and to open the door to new opportunities.

Our experience with QNX CAR suggests that a system with an Adobe Flash Lite user interface communicating with underlying components through PPS messaging is the most effective solution. It facilitates user interface branding, localization and customization without impact on underlying components, and it simplifies the addition (both during development and in the future) of new applications and components.

Running a secondary Flash player allows untrusted applications to be downloaded and used while protecting other applications and the system.

The QNX Neutrino RTOS's adaptive partitioning technology is also available to make maximum use of available computing power; more flexible than rigid partitions, adaptive partitioning offers available CPU cycles to processes that can use them, while guaranteeing resources to critical processes and applications.

So much of the QNX CAR system is the RTOS, which is not applicable to the Exploratory Project since it is Windows based. Just the portions about easily adding or moving applications seem directly applicable and similar, so it is an example of the usefulness of this ability. While the adaptive partitioning might be something that could be added to an aircraft system, it could only be used for Level C or D applications. I would suppose that it could never be used for more critical applications, so such an operating system would need to limit adaptive scheduling to particular applications.