Wednesday, February 13, 2013

Exploratory Project with Linux C# MonoDevelop Widget Events

Background:

It's been a while in coming but the light dawned yesterday and now widget events (e.g., button clicks) are received by the MonoDevelop WinForms solution that was imported from the Microsoft Visual C# Express project.

This is the C# code to implement a PC based ARINC-661 Cockpit Display System (CDS) application without the "Cockpit".  Call it a PCDS maybe - Personal Computer Display System.  Now it runs
A) under Windows via the GNAT Ada user applications that communicate via the A661 protocol with the C# display application and
B) under Linux Mint via the same Ada application (with a Named Pipe interface package replacing the Microsoft MS Pipe interface package and the use of special packages to interface to Linux replacing Win32Ada).  The display application runs via MonoDevelop.  Presumably the MonoDevelop display application could also be used with Windows.

Initially I used Fedora Linux.  The MonoDevelop that I used with Fedora couldn't handle the .NET 4.0 that I was using with C#.  But very little adjustment was necessary to display the WinForm that took the place of an A661 layer.  However, the application failed to receive widget (Microsoft control) events such as button clicks except for the form close (X) button.

I then tried MonoDevelop with a Windows download with the same result.

Next I installed Linux Mint Cinnamon version 14 as mentioned in the previous blog post (Exploratory Project with Linux C# MonoDevelop).  With it I was able to revert to the WinForms .NET 4.0 code.  The only change was to how the queue was implemented; the queue is what lets the display threads obtain the A661 commands read from the named pipe for a particular layer (in, of course, their own receive threads), since Windows requires that the same thread that paints the display form also receives its events.

Again the widget events weren't received by the MonoDevelop display application. 

While trying to figure out what to do, I did a number of internet searches and came up with various bits of information.  One said that widgets had to be wrapped inside a special event container for GTK to receive their events.

I had also been playing with GTK and so got sample code running that way.  (See the Linux Mint section of the previous post.)  However, in attempting that I ran into the wall that WinForms and GTK shall never mix.  So I couldn't put a GTK event container such as a VBox around the widgets of the form.

Finally:

Finally I found a "First steps in Mono Winforms" post (zetcode.com/gui/csharpwinforms/firststeps/) that said, yes indeedy, it is possible to get click and other events using a WinForms MonoDevelop application.  I made myself a test solution using the supplied sample code and got the OnEnter and OnClick events that it illustrated.

Now I knew there had to be a solution to my problem other than reworking the display application into a Gtk# application.

The Solution:

I first tried various versions of the sample code with the form of the Exploratory Project display app.  What had worked in the standalone sample MonoDevelop solution with its MForm class derived from the Windows Form class just didn't seem to work with my form.

After tearing out a little more of my hair, the light suddenly dawned, the epiphany arrived. 

In the sample the form was just created.  That is,
 using System;
 using System.Windows.Forms;

 class MForm : Form {
    public MForm() {
    . . .
    }

    void OnClick(object sender, EventArgs e) {
       Close();
    }

    void OnEnter(object sender, EventArgs e) {
       Console.WriteLine("Button Entered");
    }

    public static void Main() {
       Application.Run(new MForm());
    }
 }
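For reference, the ". . ." in the constructor is where the sample creates a button and attaches the handlers.  My reconstruction of the gist - not the exact zetcode code - is along these lines:

    Button button = new Button();
    button.Text = "Quit";
    button.Click += new EventHandler(OnClick);      // wire the click handler
    button.MouseEnter += new EventHandler(OnEnter); // wire the enter handler
    Controls.Add(button);                           // add the button to the form

The point being that the handlers are ordinary methods hooked to the control's events; nothing more is needed for the events to arrive.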

Whereas in the A661 Exploratory Project case it was created and then set (temporarily) to disabled.

  public class UserForm : Form
  {
    . . .
    public UserForm(int formIndex)
    {
      initializeForm(formIndex);

    } // end UserForm constructor

    private void initializeForm(int formIndex)
    {
      . . .
      if (!Display.widgetsCreated[formIndex])
      {
        widgetIndex = formIndex;
        widgetId = Display.formDesc[formIndex].formId;
        layerId = Display.formDesc[formIndex].layerId;
        layerIndex = Display.formDesc[formIndex].layerIndex;

        buttonCount = 0;
        checkBoxCount = 0;
        listBoxCount = 0;
        labelCount = 0;
        textBoxCount = 0;

        myControl.count = 1;
        myControl.depth = 0;
        myControl.controls = new Control.ControlCollection[6];
        for (int i = 0; i < 6; i++)
        {
          myControl.controls[i] = this.Controls;
        }

        containerCount = 0;
        panelCount = 0;

        Display.widgetsCreated[formIndex] = true;
      }

      this.SuspendLayout();

      // Set widget index and id of form for lookup as another kind of widget.
      widgetIndex = formIndex;
      widgetId = Display.formDesc[formIndex].formId;

      //
      // Form
      //
      string formId, formDesc, formParentId, formBorder;
      float scaleWidth, scaleHeight;
      int sizeWidth, sizeHeight;
      int parentFormIndex = -1;
      FormDisplay.formScale scale;
      FormDisplay.formSize size;
      formId = Display.formDesc[formIndex].formId;
      formDesc = Display.formDesc[formIndex].formDesc;
      formParentId = Display.formDesc[formIndex].parentId;
      formBorder = Display.formDesc[formIndex].formBorder;
      scale = Display.formDesc[formIndex].formDimension;
      size = Display.formDesc[formIndex].formSize;
      scaleWidth = scale.width;
      scaleHeight = scale.height;
      sizeWidth = size.width;
      sizeHeight = size.height;
      int x, y;
      x = Display.formDesc[formIndex].xGet(Display.formDesc[formIndex]);
      y = Display.formDesc[formIndex].yGet(Display.formDesc[formIndex]);
      if (x != 0 || y != 0)
      {
        this.Location = new System.Drawing.Point(x, y);
      }
      else
      {
        this.Location = new System.Drawing.Point(50, 220);
      }
      this.AutoScaleDimensions = new System.Drawing.SizeF(scaleWidth, scaleHeight);
      this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
      this.ClientSize = new System.Drawing.Size(sizeWidth, sizeHeight);
      this.Name = formId;
      this.Text = formDesc;

      if (formBorder == null || formBorder.Equals("NONE"))
      {
        this.FormBorderStyle = FormBorderStyle.None;
      }

      this.Load += new System.EventHandler(this.Form_Load);

      createWidgets(formIndex);

      // determine if top level form or child
      parentFormIndex = Display.widgetLoad.getFormIndexForId(formParentId);
      if (parentFormIndex >= 0)
      {
        this.TopLevel = false; // indicate that a child
        // add child form to parent form
        Display.userForm[parentFormIndex].Controls.Add(this);
      }
      else
      {
        this.TopLevel = true; // indicate parent form
      }
      this.ResumeLayout(false);
      this.PerformLayout();
      this.Enabled = Display.formDesc[formIndex].enabled;

    } // end method initializeForm

Most of the information used above comes from parsing the xml of the A661 Definition File that specifies what the layer (and hence the Form) is to look like.  At the end of the above code, this.Enabled was set to false since ENABLE was A661_FALSE in the Definition File - that is,
<A661_LAYER
      DESCRIPTION="GUI1 Layer"
      LAYER_IDENT="3002"
      NET_PORT="/home/clayton/Source/EP/DisplayLayer"
      POS_X="0"
      POS_Y="0"
      SIZE_X="800"
      SIZE_Y="800"
      UNIT_NAME="GUI1"
      FORM_IDENT="3000"
      ENABLE="A661_FALSE"
      VISIBLE="A661_TRUE"
      SCALE_WIDTH="6"
      SCALE_HEIGHT="13"
      FORM_BORDER="FixedDialog">
so the form, as initialized in the call from the constructor, is initially disabled.  (Note: This xml is used to create the form that takes the place of the layer in the PC implementation.)
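As a rough illustration of that parsing (the real work is done by the widgetLoad component; the file path and variable names here are made up), the layer attributes above could be pulled out with System.Xml along these lines:

 using System;
 using System.Xml;

 class LayerAttributes
 {
   static void Main()
   {
     XmlDocument doc = new XmlDocument();
     doc.Load("/home/clayton/Source/EP/DefinitionFile.xml"); // hypothetical path
     XmlElement layer = (XmlElement)doc.GetElementsByTagName("A661_LAYER")[0];
     bool enabled = layer.GetAttribute("ENABLE").Equals("A661_TRUE"); // A661_FALSE above
     int sizeX = int.Parse(layer.GetAttribute("SIZE_X"));             // 800
     int sizeY = int.Parse(layer.GetAttribute("SIZE_Y"));             // 800
     Console.WriteLine("enabled={0} size={1}x{2}", enabled, sizeX, sizeY);
   }
 }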

The form becomes enabled when the display app (the PC version of the A661 CDS) receives the A661 Enable Layer command. 

I changed the statement at the end of initializeForm to
     this.Enabled = true;
to have the form enabled from the start. 

With this change, the button clicks were received and hence were forwarded as Event Messages to the user app (app1) via the named pipe.  The user app then routed them to the Widget Action (WA) component associated with the widget identifier, as illustrated in the diagram of last week's post.  The Widget Action component then created a command to modify the display, the command was transmitted back to the display app, and the display was modified.
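As a sketch of that first hop (BuildA661Event and pipe are illustrative names, not the project's actual identifiers), the click handler only has to identify the widget and hand an event message to the pipe writer:

   private void Button_Click(object sender, EventArgs e)
   {
     Control widget = (Control)sender;  // the clicked button
     string widgetId = widget.Name;     // widget identifier from the Definition File
     byte[] message = BuildA661Event(layerId, widgetId); // encode the A661 event
     pipe.Transmit(message);            // forward to the user app (app1)
   }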

Therefore, MonoDevelop only allows widget events from a form (other than from the form close widget in the upper right corner) when the form is enabled at the time it is created; it does not recognize a later enable of the form.

Consequences:

I had set ENABLE in the xml to A661_FALSE for the form so that all the widgets of the layer (that is, the form) would be disabled until the user application sent the command to enable the layer.  (Note: Under A661, for a child widget of a container to be enabled, the widget itself must be enabled and its parent container must be enabled, all the way back to the top level container - which in this case is the form.  Therefore, with the form disabled, all the widgets were disabled.)
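In code terms the rule amounts to a recursive AND up the parent chain; a minimal sketch, assuming a hypothetical Widget record with an Enabled flag and a Parent link:

   // A widget is effectively enabled only if it and every ancestor
   // container up to the top level (the form) are enabled.
   static bool EffectivelyEnabled(Widget w)
   {
     return w.Enabled && (w.Parent == null || EffectivelyEnabled(w.Parent));
   }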

Needing to have the form enabled from the beginning for the widget events to be received meant I had to find a different way to have the upper level container disabled until the layer is commanded to be enabled.

Voila!  I just happened to have the solution designed in from the start.  That is, my Definition File xml has just such a container, as follows.

<A661_LAYER
      DESCRIPTION="GUI1 Layer"
      LAYER_IDENT="3002"
      NET_PORT="/home/clayton/Source/EP/DisplayLayer"
      POS_X="0"
      POS_Y="0"
      SIZE_X="800"
      SIZE_Y="800"
      UNIT_NAME="GUI1"
      FORM_IDENT="3000"
      ENABLE="A661_FALSE"
      VISIBLE="A661_TRUE"
      SCALE_WIDTH="6"
      SCALE_HEIGHT="13"
      FORM_BORDER="FixedDialog">
      <WIDGET
            DESCRIPTION="GUI1 Layer"
            WIDGET_TYPE="A661_BASIC_CONTAINER"
            WIDGET_IDENT="3101"
            PARENT_IDENT="3000"
            ENABLE="A661_FALSE"
            VISIBLE="A661_TRUE"
            POS_X="0"
            POS_Y="0">

That is, I have had a container as the direct child of the form that represents the layer.  This container (widget 3101) references widget 3000 - the form - as its parent.  No other widget of the xml references 3000 as its parent; the direct children all reference the layer container widget 3101.  Just what is needed.

So all it took was to change the ENABLE of this container widget to A661_FALSE in the Definition File as shown above.  This made all the child widgets disabled (grayed out) when the form was first displayed.  The receipt of the Enable Layer command was already implemented to enable this widget along with the form.

So the solution to the missing widget events was quite simple once I figured out the reason they weren't being received.  Oh, but the game of hide and seek in the meantime.

Now the xml can be changed once more to set ENABLE of widget 3000 (the form) to A661_TRUE, and the statement at the end of initializeForm can be restored to using the value from the Definition File.

This was done and all is happy.  The display app now sets the initial form enable from the Definition File as before, but the xml now specifies that the enable should be true.  What wouldn't work before now works with only these two changes to the Definition File.


Tuesday, February 5, 2013

Exploratory Project with Linux C# MonoDevelop



Overview:

After taking a month long vacation I got the Linux version of the C# display app to interface, via the A661 protocol, with the Ada user app that sends commands to the display app to modify what's displayed and receives events from the display app that specify the operator's interaction with the display.

This post will concentrate on the changes to the display app and the user app to run under the Linux operating system.  These are mostly involved with the use of Linux named pipes rather than MS Pipes used for the transfer of data using Windows. 

After both applications (separately launched processes) were able to communicate to make the A661 display layer active and enabled, as requested by the user app after power up, I found that button clicks were not received by the display app so such events could not be forwarded to the user app.  Thus the operator (myself) couldn't cause the display to be modified.  The display was only modified by the dynamic commands from the user app to hide or make visible various buttons and to update the date and time widget (i.e., control).  This problem will be the subject of another post.

First:

In the previous post, "Linux C# Mono for Display App" from early December, a month and a half ago, I mentioned that the concurrent queue class was no longer available in the MonoDevelop release.  This didn't prove to be a problem.  Instead of declaring the "queue" instance as
  ConcurrentQueue<queueType> queue = new ConcurrentQueue<queueType>();
for a concurrent queue, the queue instance was changed to
  Queue Q = new Queue();
  Queue queue = Queue.Synchronized(Q);
without the queueType type parameter.

Even though the queue no longer used the queueType, my Enqueue and Dequeue methods still worked as they did in the Windows display app.
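For illustration, the kind of thin wrapper that preserves a typed Enqueue/Dequeue interface over the synchronized queue looks about like this (QueueMessage stands in for my queueType):

 using System.Collections;

 public class QueueMessage { } // placeholder for the project's queueType

 public class SafeQueue
 {
   // Queue.Synchronized returns a thread-safe wrapper around the plain Queue.
   private readonly Queue queue = Queue.Synchronized(new Queue());

   public void Enqueue(QueueMessage item)
   {
     queue.Enqueue(item);
   }

   public QueueMessage Dequeue()
   {
     // The non-generic Queue stores object, so cast back on the way out.
     return (QueueMessage)queue.Dequeue();
   }

   public int Count { get { return queue.Count; } }
 }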

Also, the launch of the user app from the display app worked, whereas I had thought there would be a problem.  This is the case when the display app is run from a Linux terminal; the launched user app can have problems if it is launched from the MonoDevelop application running the display app mono solution.  So I created a script (runcds) to start fresh each time from a terminal window.
#!/bin/bash
# delete the named pipes created by the display and user apps
rm /home/clayton/Source/EP/app1to2
rm /home/clayton/Source/EP/app1to3
rm /home/clayton/Source/EP/app2to1
rm /home/clayton/Source/EP/app2to3
rm /home/clayton/Source/EP/app3to1
rm /home/clayton/Source/EP/app3to2
rm /home/clayton/Source/EP/DisplayLayer1App01to15
rm /home/clayton/Source/EP/DisplayLayer1App15to01
# run the display app that runs the user apps
mono /home/clayton/Source/EP/CDS/A661/bin/Debug/A661CDS.exe
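For reference, the launch itself is just a Process.Start call; a minimal sketch (the executable path is illustrative):

 using System.Diagnostics;

 class LaunchUserApp
 {
   static void Main()
   {
     ProcessStartInfo info = new ProcessStartInfo();
     info.FileName = "/home/clayton/Source/EP/app1"; // hypothetical user app binary
     info.UseShellExecute = false;                   // run the binary directly, no shell
     Process userApp = Process.Start(info);
   }
 }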

Sideshow - Import of dll:

When thinking that I would need to create my own C code, invoked from the Mono code, to implement Microsoft C# features that the Windows display app was using, I found out how to create my own dll and import it into the MonoDevelop code.  Afterwards this proved to have been unnecessary since I never had to write my own named pipe interface.  However, I will briefly outline the technique here.

1) The C function header to be part of the dll is created as name.h where name is the creator's choice and the C function header contains the function declarations.  Of course, the name should not match an existing name in the root lib folder. 

For example, I named the file external_interface.h with the following contents.

#ifndef EXTERNAL_INTERFACE
#define EXTERNAL_INTERFACE

// File external_interface.h
//
// This header provides the C interface from which to create
// libexternalinterface.so as the dll to use to support the C#
// applications.

int Invoke(int Method, int Count, char* Name );

int Test(int Method);

#endif // EXTERNAL_INTERFACE

2) The C function code is created as name.c.  It contains the function code for the functions of the header and any needed private functions.  It must also contain the _init() and _fini() functions.  For example

// File external_interface.c

#include <stdio.h>
#include <fcntl.h>     /* creat */
#include <unistd.h>    /* write, close */
#include <sys/types.h> /* mkfifo */
#include <sys/stat.h>  /* mkfifo */
#include "external_interface.h"

int LogOpened = 0;
int LogHandle = -1;

int Invoke(int Method, int Count, char* Name)
{
  int Result = Method;
  printf("Inside Invoke\n");
  if (LogOpened == 0)
  {
    LogHandle = creat("/home/clayton/Source/EP/CDS/C.log", 0644); /* second argument is the file permission mode */
    if (LogHandle < 0)
    {
      return -2;
    }
    else
    {
      LogOpened = 1;
      write(LogHandle,"Inside Invoke",13);
    }
  }

  if (Method == 1) // make pipe for read only
  {
    char path[Count+1];
    int i;
    for (i = 0; i < Count; i++)
    {
      path[i] = (char)(Name[i]);
    }
    path[Count] = 0;
    printf("path: %u %s\n", Count, path);
    if (LogHandle >= 0)
    {
      write(LogHandle,path,Count);
      close(LogHandle);
    }
    return mkfifo(path, 0666); /* the array decays to char*; mode 0666 grants read/write */
  }
  else
  {
    return -3;
  }
}

int Test(int Method)
{
  int xxx = 1000;
  if (Method == 0)
  {
    xxx = 10;
  }
  else if (Method == 1)
  {
    xxx = 100;
  }
  return xxx;
}


void _init()
{
   printf("Inside _init()\n");
}
void _fini()
{
   printf("Inside _fini()\n");
}

There is a problem with passing strings from C# to C (see the Invoke function).  I finally worked it out with the one method shown in the above example - passing the characters as a byte array along with a count.

3) Compile the shared library as
  $ gcc -fPIC -c name.c
  $ ld -shared -soname libname.so.1 -o libname.so.1.0 -lc name.o

Note:  The leading $ on the above lines indicates running in a Linux terminal window under the creator's login.  Also, the "name" within "-soname" is part of the option name itself; it is to be used as is rather than replaced by the dll creator's name.

The dll is created via the pair of commands of this step.  The suffix so - for shared object - is used in Linux instead of dll.  The commands produce libname.so.1 and libname.so.1.0.  The name.o of the second line is the object file output by the gcc compile line.

4) Install the shared library by copying libname.so.1.0 to /usr/lib or /usr/local/lib, switching to super user (su), and changing to that lib directory (i.e., cd /usr/lib or cd /usr/local/lib).  Then,
   #ldconfig -v -n /usr/lib or
   #ldconfig -v -n /usr/local/lib

Note:  The leading # indicates that now running as the super user.

5) Create the link for the linker name as
   #ln -sf libname.so.1 libname.so

The creator's MonoDevelop class can be represented by the following External_Interface class of external_interface.cs.  (Note: lower case first letters for file names is somewhat standard usage in Linux.  GNAT, for instance, expects it.)

using System.Runtime.InteropServices;
using System.Text; // for ASCIIEncoding

namespace A661CDS
{
    unsafe

    public class External_Interface
    {
        public External_Interface ()
        {
        } // end constructor

        [DllImport("/usr/lib/libexternalinterface.so.1")]
        private static extern int Invoke(int Method, int Count, byte[] Name);

        public static int Invoke_mkfifo(char[] path, int mode)
        {
            ASCIIEncoding encoding = new ASCIIEncoding();
            byte[] arr = encoding.GetBytes(path);

            int Result = Invoke(1, path.Length, arr);
            return Result;
        } // end method Invoke_mkfifo

    } // end class External_Interface

} // end namespace A661CDS

The DllImport provides the path to libexternalinterface where externalinterface is the name portion of the steps above to create a dll.

Other Mono classes can then reference External_Interface Invoke_mkfifo to access the Invoke function of the creator's dll.  Passing a Mono char array to C is also illustrated.
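For example, a caller could create a fifo as follows (the path is illustrative; mode is unused by the C implementation above):

   int result = External_Interface.Invoke_mkfifo(
                  "/home/clayton/Source/EP/testpipe".ToCharArray(), 0);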

Gtk Problems:

After getting the A661 interface between the display and user apps running I found that the Exploratory Project display app (as built with MonoDevelop) wouldn't receive button click events.  While attempting to get button clicks to work I ran into problems with Gtk and Gtk# when I tried to do some test MonoDevelop examples found via internet searches.  These will be discussed more in another blog post. 

But suffice it to say for now, attempts to build the examples with my Fedora 17 installation (with Gtk and Gtk# downloaded and installed) failed with Mono giving error messages that it didn't recognize references to Gtk even with "using Gtk;" specified.  The same type of thing happened with the MonoDevelop for Windows that I downloaded on a desktop Windows 7 PC.  (This latter download, without looking into it too much, seemed to only run in a DOS window.  I expected better.)

Using the Windows MonoDevelop download, I tried the test case helloworld.cs from the internet:
using System;
using Gtk;

public class GtkHelloWorld
{
     
  public static void Main()
  {
    Console.WriteLine("Hello World");    
  }
 
}

Note:  Gtk is referenced by the using but isn't necessary for the code.

Using the downloaded mcs.bat file as instructed by the Mono site, the DOS command "mcs MONO_OPTIONS helloworld.cs" wouldn't compile.  Same with the command
"mcs -pkg:gtk-sharp-2.0 helloworld.cs" as given by the Mono site.  Instead it output errors for various dll files, such as gtk-sharp.dll, not being found.  I finally determined that the Mono site was giving Linux instructions rather than instructions for the use of the Windows download.

I then made my own mcs.bat file (in my own folder that I had created for the example test helloworld.cs file) as follows:
@echo off
echo %MONO_OPTIONS%
set MONO_OPTIONS=
echo mono_options %MONO_OPTIONS%
echo next1
set prefix=C:\PROGRA~2\MONO-2~1.9
set exec_prefix=%prefix%
set libdir=%exec_prefix%\lib
set dlldir=%libdir%\mono\gtk-sharp-2.0
set MONO_OPTIONS=/r:%dlldir%\pango-sharp.dll /r:%dlldir%\atk-sharp.dll /r:%dlldir%\gdk-sharp.dll /r:%dlldir%\gtk-sharp.dll
echo %MONO_OPTIONS%
echo next2
@"C:\PROGRA~2\MONO-2~1.9\bin\mono.exe" "C:\PROGRA~2\MONO-2~1.9\lib\mono\2.0\mcs.exe" %MONO_OPTIONS% %*

where the echo commands were done to check what the settings were.  The last set command and the last line are each shown as two lines in the above text since they can't be displayed here as one line.  But they are both single lines in the .bat file.  Note:  C:\PROGRA~2\MONO-2~1.9 is the DOS contraction of the Windows folder into which Mono was installed.

The last line supplies the MONO_OPTIONS as the arguments for mcs.exe in the Windows DOS manner.  The MonoDevelop site document had MONO_OPTIONS prior to invoking mcs (that is, "mcs MONO_OPTIONS helloworld.cs"), so DOS was using MONO_OPTIONS (or -pkg:gtk-sharp-2.0) as the arguments for mono.exe (I imagine) rather than for mcs.exe.  This caused the dlls not to be found.

Linux Mint:

I then installed 32-bit Linux Mint version 14 "Nadia" Cinnamon on the desktop PC.  And what a joy it has turned out to be in the very limited time that I have used it.  More on that in another blog post.  For now, I downloaded MonoDevelop to Linux Mint and installed it.  With that, MonoDevelop runs as a visual app as it had with Fedora, and it had no problems finding Gtk for the test solutions.

And, after yet another internet search, I found a reason for the lack of receipt of button click events and was able to do another example – after making some modifications to get it to compile. 

The event problem was due to MonoDevelop needing a special kind of container widget in which to place child widgets from which events were to be received.  I will next attempt to apply this to the Exploratory Project and report in a future blog post.  In the meantime, here is the code of the modified example.

using System;
using Gtk;

namespace test3
{

  public class GtkHelloWorld
  {

    public static void Main()
    {
      Application.Init();

      // Create the Window
      Gtk.Window myWin = new Gtk.Window("GTK# Button Click Application");
      myWin.Resize(200,200);

      RadioButton rbt1 = new RadioButton(null, "rbt1");
      RadioButton rbt2 = new RadioButton(rbt1, "rbt2");
      RadioButton rbt3 = new RadioButton(rbt1, "rbt3");

      // Create container for event widgets to allow events to be received
      VBox vbx1 = new VBox();

      // Add the radio buttons to the container
      vbx1.PackStart(rbt1, false, false, 0);
      vbx1.PackStart(rbt2, false, false, 0);
      vbx1.PackStart(rbt3, false, false, 0);

      // Create a label and put some text in it.    
      Label myLabel = new Label();
      myLabel.Text = "Button Click Test!!!!";

      // Add the label to the form    
      vbx1.PackStart(myLabel);

      myWin.Add(vbx1);

      // Show Everything    
      myWin.ShowAll();

      // Add radio button click event handlers
      rbt1.Clicked += delegate(object sender, EventArgs e)
                      {
                        if (rbt1.Active)
                        { // don't display when becomes inactive
                          Console.WriteLine ("rbt1.Clicked");
                        }
                      };
      rbt2.Clicked += delegate(object sender, EventArgs e)
                      {
                        if (rbt2.Active)
                        {
                          Console.WriteLine ("rbt2.Clicked");
                        }
                      };
      rbt3.Clicked += delegate(object sender, EventArgs e)
                      {
                        if (rbt3.Active)
                        {
                          Console.WriteLine ("rbt3.Clicked");
                        }
                      };

      // Run the application
      Application.Run();  

    } // end method Main

  } // end class GtkHelloWorld

} // end namespace test3

Notice that, although the internet response reporting the need for the special event widget said that each button (for instance) for which a click event was to be received had to be a child of its own event widget, the example adds four widgets to a single VBox as children - three of them radio buttons for which click events are received.  The MonoDevelop build reported an error when the label widget was added to the window rather than to the special VBox widget: a window can contain only one widget.  There was no error when the label was also added to the VBox and the VBox was the only widget of the window.

Finally - Interface via Named Pipes between MonoDevelop app and Ada app:

In the Exploratory Project display app I cloned the MS_Pipe class as a Named_Pipe class and found that I got exceptions trying to open the pipes.  Hence my thinking that I would need to provide my own dll (so) to implement named pipes.

It turned out that the original error was due to MonoDevelop not supporting the PipeTransmissionMode.Message option that I had used with MS_Pipe of Windows.  Switching to
  pipeServerXmit = new NamedPipeServerStream(
                    pipeName,
                    PipeDirection.InOut,       // The pipe is bi-directional
                    1,                         // MaxAllowedServerInstances,
                    PipeTransmissionMode.Byte, // Byte-based communication
                    PipeOptions.None,          // No additional parameters
                    Display.BUFFER_SIZE,       // Input buffer size
                    Display.BUFFER_SIZE );     // Output buffer size
with the use of Byte rather than Message allowed the pipe stream to be created. 

Note:  In the Exploratory Project, two named pipes are used for each pair of apps.  One is used by the first app to send A661 messages to the second app, which uses it for its receive from the first.  The other is used by the second app to send A661 messages to the first app, which does its receive on this second pipe.  This can be observed in the runcds bash script provided at the beginning of this post.  That is,
  rm /home/clayton/Source/EP/DisplayLayer1App01to15
  rm /home/clayton/Source/EP/DisplayLayer1App15to01
where the A661 user app (the Ada app) is identified as app 1 and the display app (the MonoDevelop app) is identified as app 15. 

Note:  The path part of the name indicates where the pipe "file" will appear via an ls command (dir in DOS).

The names have been created to indicate the direction of the message transmission.  In this case, the number of the A661 display layer (that is, 1) is also included to allow different pipe pairs to be created for communication for different layers.  The display app treats all the A661 layers, while different user apps can treat the events of a particular layer and provide commands for what that layer is to display.  For instance, apps 2 and 3 are anticipated in the runcds bash script; they can exist in the Exploratory Project, but the display app hasn't been fully implemented for them under A661, although they were previously implemented for a simple text protocol.
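In code the naming scheme is just string composition; BuildPipeName below is illustrative, not a project method:

   // Compose a pipe path from the layer number and the sending and
   // receiving app numbers; matches the names in the runcds script.
   static string BuildPipeName(int layer, int fromApp, int toApp)
   {
     return string.Format("/home/clayton/Source/EP/DisplayLayer{0}App{1:D2}to{2:D2}",
                          layer, fromApp, toApp);
   }
   // BuildPipeName(1, 1, 15) => /home/clayton/Source/EP/DisplayLayer1App01to15
   // BuildPipeName(1, 15, 1) => /home/clayton/Source/EP/DisplayLayer1App15to01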

With the modification from PipeTransmissionMode.Message to PipeTransmissionMode.Byte, the problem became implementing a generic Named_Pipe Ada package to replace the MS_Pipe package of the Windows user app design, illustrated by the "A661 Component Interface" figure that follows.

[Figure: A661 Component Interface]
In the figure, there is a GUIn package (for instance, the GUI for Layer 1 is named GUI1 in the Ada code) to be the A661 layer manager for each display layer treated by a particular user app.  Each GUIn package instantiates an instance of the MS_Pipe package to be the driver for the pipe.  Each instance will open each of the two pipes of the pipe pair.

The MS_Pipe package is similar to a C# class in that the instantiation creates an object with its own data but using the common code.  The Transmit method can be called directly by GUI1 since no separate thread is needed.  However, each instantiation creates its own Receive thread via the framework.  Therefore, individual threads are able to wait for a message via their receive pipe.

The received messages must be provided to the correct GUIn package for treatment.  This is done by supplying, when the generic package is instantiated, a callback to a GUIn package method that queues the received message.  After reading the pipe, the particular Receive thread invokes the supplied callback, which executes within the particular GUIn package while still running in the Receive thread.  The callback queues the message and then sends a wakeup event (since the Receive thread is the thread currently running) so that the GUIn wakeup callback can later read the queue.
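Since the Ada source isn't shown here, a rough C# analogue of that instantiate-with-callback pattern (all names illustrative) may make the shape clearer:

 using System;
 using System.Threading;

 public class PipeReceiver
 {
   private readonly Func<byte[]> readPipe;       // blocking read of this instance's pipe
   private readonly Action<byte[]> queueMessage; // GUIn callback: queue + wakeup event

   public PipeReceiver(Func<byte[]> readPipe, Action<byte[]> queueMessage)
   {
     this.readPipe = readPipe;
     this.queueMessage = queueMessage;
     // Each instance gets its own receive thread, as each Ada
     // instantiation gets its own Receive task.
     new Thread(ReceiveLoop) { IsBackground = true }.Start();
   }

   private void ReceiveLoop()
   {
     while (true)
     {
       byte[] message = readPipe(); // wait for a message
       queueMessage(message);       // runs in this receive thread; queues the
                                    // message and signals the GUIn wakeup
     }
   }
 }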

The cloned MS_Pipe package was then reworked to treat Linux named pipes as the Named_Pipe generic package and associate the correct pipe name for Named_Pipe transmit and for Named_Pipe receive.

With these changes, the pipes between the user app and the display app connected and the GUI1 package sent the initial A661 command to enable layer 1.  Layer 1 then became enabled so that the widgets became enabled rather than just visible according to the xml of the Definition File.  The display app then sent its A661 acknowledgement that the layer was enabled and visible. 

GUI1 then allowed the command messages generated by the Dynamic Action (DAs in the figure) components to be transmitted.  The implemented dynamic actions output the current date and time to a particular widget of the Definition File and output hide and make visible commands to various button widgets of the Definition File.

Thus the displayed layer dynamically changed.

However, the buttons that the operator clicked on to send widget action events to the user app failed.  The MonoDevelop display app didn't receive the events and so didn't send them to the user app via the named pipe.  Not being received, the Layer Management GUI1 couldn't forward them to the particular Widget Action (WA) component that the application had associated with the widget identifier contained in the widget event message.  As mentioned in the Overview, this problem will be discussed in another post.