Saturday, September 11, 2010

Some Commentary on the “TALIS Airborne Architecture Recommendation Document”

From the “TALIS Airborne Architecture Recommendation Document” TALIS-WP5-AAR-0016, http://talis.eurocontrol.fr/pub/d5.2%20TALIS-WP5-AAR-0016-v1-0.pdf page 17


3.2.1.2 Openwings Architecture

The key value of the Openwings architecture is the ability to create systems composed of completely reusable components. Openwings is an instantiation of the Service Oriented Programming (SOP) paradigm.

Openwings provides a framework for SOP that is independent of discovery mechanisms, transport mechanisms, platforms, and deployment environments. The following figure shows a high level view of the Openwings architecture.







Figure 8: Openwings architecture

Openwings is a network-centric architecture and hence the diagram rests on a network. The Openwings framework depends on Security, Management, Context, and Install Services. The Security Service provides code, transport, and service security. The Management Service allows management of any component in the framework. The Context Service provides a deployment environment and system boundary. The Install Service provides hands-off installation and propagation of static contextual information. The Container Service handles life cycle management of components and provides a clustering capability. The Connector Service provides protocol independence for asynchronous and synchronous protocols. The Component Service provides a simple programming abstraction for SOP and discovery independence. The unit of interoperability in Openwings is a Java Interface. Well-defined interfaces and code mobility provide the core enablers of Openwings interoperability.

On top of the Openwings framework, components and domain models can be built. Openwings provides some common facilities, such as data services. The Data Service is built on the Openwings core and provides an object-based abstraction for persistence mechanisms that is independent of databases.

Each element in the Openwings Core can be used independently, with little or no interdependencies. This provides the greatest flexibility to Openwings users. In cases where one service uses another, it is designed to integrate the service but still run without it. For instance, the container services can communicate with the install services to find new components.


Also from TALIS-WP5-AAR-0016:


5.2.2.3 Messaging JMS Domains

Before the JMS API existed, most messaging products supported either the point-to-point or the publish/subscribe approach to messaging. The JMS Specification provides a separate domain for each approach and defines compliance for each domain. A standalone JMS provider may implement one or both domains. A J2EE provider must implement both domains.

In fact, most current implementations of the JMS API provide support for both domains. Durable subscriptions provide the flexibility and reliability of queues but still allow clients to send messages to many recipients. The following figure illustrates pub/sub messaging.

Figure 14: Publish-Subscribe model
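
As a point of reference before my commentary below, this is roughly what a durable subscription looks like in plain JMS. It is only a minimal sketch against the standard javax.jms API; the client ID, subscription name, and JNDI lookup names are placeholders of my own, not anything from TALIS or from my project.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class DurableSubscriberSketch {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        // Placeholder JNDI names; a real JMS provider configuration would supply these.
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory");
        Topic topic = (Topic) jndi.lookup("jms/SampleTopic");

        Connection connection = factory.createConnection();
        connection.setClientID("monitor-1");  // a client ID is required for durable subscriptions
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // A durable subscriber receives every published message, including those
        // sent while it was inactive, rather than only the most recent one.
        MessageConsumer subscriber = session.createDurableSubscriber(topic, "monitor-1-subscription");
        connection.start();

        // Assumes the producer sent a text message.
        TextMessage message = (TextMessage) subscriber.receive();
        System.out.println("Received: " + message.getText());
        connection.close();
    }
}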



My commentary on the above as it relates to my exploratory project:

It would seem that my exploratory project could be described as peer-to-peer; each application is the equivalent – that is, the peer – of the other. Data Flow and ordinary Event Driven notify messages are not really retained until consumed; rather, the most recent instance of the message is the one delivered to the Reader. But Content Filtered, N-to-One, and One-of-N are point-to-point and, at least for Content Filtered and N-to-One, instances are retained until consumed.

Perhaps I should add the durable subscriptions feature so that the exploratory project could have multiple subscribers that each receive every instance of a published topic. This would be somewhat like Data Flow and ordinary Event Driven notify topics, but the topic instances would be queued to each subscriber/consumer so that they receive each instance rather than just the most recent. From Figure 14, this is N-to-M: multiple producers (N) and multiple consumers (M), with each consumer able to request a durable subscription.
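
In terms of my framework, the feature might look something like the following hypothetical registration interface (the names are only illustrative, not the framework's actual identifiers): a consumer subscribes durably to a topic and supplies the queue depth it wants, and every published instance is then queued to every durable subscriber.

// Illustrative only: a possible shape for a durable-subscription API in the exploratory framework.
public interface DurableSubscriptionService<T> {

    /** A per-consumer view of one durable subscription. */
    interface Subscription<E> {
        /** The next queued instance in publication order, or null if the queue is currently empty. */
        E poll();
        /** The number of instances currently queued for this consumer. */
        int depth();
    }

    /**
     * Subscribe a consumer component to a topic. topicName is the topic to subscribe to and
     * queueDepth is the maximum number of instances to retain for this consumer (its KQofS).
     */
    Subscription<T> subscribeDurable(String topicName, int queueDepth);

    /** Publish an instance of the topic; it is queued to every durable subscriber. */
    void publish(String topicName, T instance);
}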

Of course, I don’t currently have multiple producers in any case except to request that the consumer provide a response (N-to-One and Content Filtered). Would allowing multiple producers of a topic be desirable, other than as a backup producer whose topic instances would be ignored/tossed unless the primary producer ceased to function? In a general system all producers would be equal, but in an aeronautics system that doesn’t seem likely except perhaps for a monitor component. That is, perhaps one or more monitor components would receive a topic from multiple suppliers, vote on which supplier to trust, and cause a switch to another supplier if the designated supplier failed. Then the Message Controller framework could allow multiple producers of many topics, with the framework discarding the redundant topic instances from all but the currently designated component. Or the consumer components could discard topics from other than the designated producer – a kind of filtering. If the latter, the components need to be able to identify the source of the message, at least to distinguish one producer from another or to match against the currently accepted producer.

Putting in the monitor / accepted-producer selector capability would be a major addition to the exploratory project. Even just allowing multiple producers, with the consumers not knowing from which producer a particular instance originated, is a major step.

That is, the framework now has the mechanism, for Data Flow and Event Driven topics, to have M+1 copies of a topic as produced by the one allowed publishing component, with any particular consumer m (of a particular application) reading whatever is the most recently published instance of the topic at the time the consumer component starts running. The M+1 copies are the thread-safe mechanism: one buffer is available to write to while each of the M consumers could be at some stage of reading the copy that was the most recently published when the consumer began its read.
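
The M+1 buffer pool exists so that no allocation or full copy is needed on each publish. Stripped of the pre-allocated pool, the latest-value semantics by themselves could be sketched in Java like this (a simplification, not the framework's actual mechanism: the instances are assumed immutable and old ones are simply garbage collected rather than reused):

import java.util.concurrent.atomic.AtomicReference;

/**
 * Latest-value delivery sketch for Data Flow / ordinary Event Driven topics:
 * one producer publishes, and each of the M consumers sees whatever instance
 * was most recently published when its read began.
 */
public final class LatestValueTopic<T> {
    private final AtomicReference<T> latest = new AtomicReference<T>();

    /** Called by the single allowed publishing component; T is assumed to be immutable. */
    public void publish(T instance) {
        latest.set(instance);
    }

    /** Called by any consumer; returns the most recently published instance, or null if none yet. */
    public T read() {
        return latest.get();
    }
}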

For N-to-One, there are N buffers for the N producers and the one consumer receives the instance that is at the top of the FIFO queue, treats it, returns the reply and then gets the next instance from the queue.
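
A rough sketch of that N-to-One behaviour, using a standard blocking queue in place of the framework's per-producer buffers (the class and field names are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public final class NToOneSketch {

    /** One request from a producer, paired with a one-slot queue for the consumer's reply. */
    static final class Request {
        final String payload;
        final BlockingQueue<String> reply = new ArrayBlockingQueue<String>(1);
        Request(String payload) { this.payload = payload; }
    }

    // Stands in for the N per-producer buffers of the framework.
    private final BlockingQueue<Request> requests = new ArrayBlockingQueue<Request>(16);

    /** Any of the N producers: enqueue a request and wait for the single consumer's reply. */
    public String send(String payload) throws InterruptedException {
        Request request = new Request(payload);
        requests.put(request);
        return request.reply.take();
    }

    /** The one consumer: take the instance at the top of the FIFO, treat it, return the reply, repeat. */
    public void consumeLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            Request request = requests.take();
            request.reply.put("handled: " + request.payload);
        }
    }
}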

In either case, the reader doesn’t need to move the topic instance from the buffer to its own memory in a thread-safe manner, since the buffer can’t be modified while the consumer is examining it.

For N-to-M this would no longer be the case. That is, each of the N producers could be writing to its buffer while each of the M consumers could be reading any particular buffer – the one placed in its queue first. Therefore, the read buffers have to be separate from the write buffers, with the data of a particular producer copied to a new read buffer as it is queued to the particular consumer. Then the producer can produce a new instance while various consumers have yet to process the previously queued instance, with some ageing mechanism to recycle the buffers of queues whose consumer is too slow to keep up.

That is, it would seem, KQofS buffers are needed for each consumer, depending on how old the topic can get and still be of use to the consumer, plus a copy of the latest instance as published by each of the N producers. KQofS could be N for each of the M consumers but still not be enough to hold one copy from each of the producers, since some could publish far more often than others. Or KQofS could be, for instance, 1 or 2 to just retain the one or two most recently published instances of the topic no matter what the source. Each consumer could specify how many instances it wants queued.
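
So the N-to-M queuing might be sketched roughly as follows (hypothetical names again): on publish, the instance is queued to each subscriber's own bounded queue, whose depth is that consumer's KQofS, and when a queue is full the oldest queued instance is recycled. For brevity the sketch queues object references; the framework would copy the producer's write buffer into a fresh read buffer at this point.

import java.util.ArrayDeque;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

/** N-to-M delivery sketch: each consumer gets its own bounded queue of topic instances. */
public final class NToMTopic<T> {

    public static final class SubscriberQueue<E> {
        private final ArrayDeque<E> queue = new ArrayDeque<E>();
        private final int kQofS;

        SubscriberQueue(int kQofS) { this.kQofS = kQofS; }

        synchronized void offer(E instance) {
            if (queue.size() == kQofS) {
                queue.pollFirst();   // consumer is too slow; recycle the oldest instance
            }
            queue.addLast(instance);
        }

        /** Consumer side: the next instance in publication order, or null if the queue is empty. */
        public synchronized E poll() {
            return queue.pollFirst();
        }
    }

    private final List<SubscriberQueue<T>> subscribers = new CopyOnWriteArrayList<SubscriberQueue<T>>();

    /** Each consumer subscribes with its own KQofS queue depth. */
    public SubscriberQueue<T> subscribe(int kQofS) {
        SubscriberQueue<T> queue = new SubscriberQueue<T>(kQofS);
        subscribers.add(queue);
        return queue;
    }

    /** Any of the N producers: the instance is queued to every subscriber. */
    public void publish(T instance) {
        for (SubscriberQueue<T> queue : subscribers) {
            queue.offer(instance);
        }
    }
}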

There would still need to be the extra copy of the topic data to free up the write buffer so the producer can publish again without regard to whether the previous instance has been read by each consumer. Even if there were N times M+1 buffers, any receiving component could only read the most recently published instance from each of the N producers without the copy. That is, an instance would not persist until read.

Of course, in the current implementation for persistent topics a producer cannot produce again until the consumer has finished running after reading the instance of the topic – at which time the buffer is released. But it would seem that a new delivery method should lift this restriction.

Note: These comments only apply to components that are within the same application. When the topic has to be consumed by one or more remote components, there is a proxy component that transmits it over the network. Therefore, a copy is made for transmitting, and there could be multiple remote consumers of this one proxy producer.

What does this say about the current design? That is, if every component were treated as a remote component, then part of the buffering problem would be taken care of. But there would be two extra copies to be made: one to transmit the message and one from the receive buffer to publish the topic via the proxy producer in the receiving application. This would defeat one of the original goals: keeping the number of copies, and the CPU cycles used while copying, to a minimum.

Yet a durable subscription feature would seem to be something to look into supplying as an option for components to use. And, whether or not multiple producers are allowed, the consumer should be able to specify the number of copies that can be queued.

This then becomes another version of the Content Filtered feature that I just added. Except that, rather than the request and response being paired in the topic, a durable subscription topic could be produced at any time there was a buffer available, and it would be queued for each consumer that had registered to read the topic.

If the consumer’s queue has become full, the new instance would replace the oldest entry not being read (since the consumer can supply its KQofS). A queue element marked as being read has to be retained since the component could be actively examining it. The KQofS queue length could be supplied when subscribing to the topic rather than when registering the component, so that there would be a separate queue for each durable subscription topic subscribed to by the consumer component.
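
The replace-the-oldest-entry-not-being-read rule could look something like this (again an illustrative sketch, not the framework's code; each queued entry carries a flag that is set while the consumer is examining it and cleared when the entry is released and popped):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

/** A bounded durable-subscription queue whose eviction skips entries the consumer is still reading. */
public final class DurableQueue<T> {

    public static final class Entry<E> {
        private final E instance;
        private boolean beingRead;     // set while the consumer is examining this entry
        Entry(E instance) { this.instance = instance; }
        public E value() { return instance; }
    }

    private final Deque<Entry<T>> entries = new ArrayDeque<Entry<T>>();
    private final int kQofS;           // supplied when subscribing to this particular topic

    public DurableQueue(int kQofS) { this.kQofS = kQofS; }

    /** Producer side: enqueue, replacing the oldest entry that is not being read when full. */
    public synchronized boolean offer(T instance) {
        if (entries.size() >= kQofS) {
            boolean evicted = false;
            Iterator<Entry<T>> oldestFirst = entries.iterator();
            while (oldestFirst.hasNext()) {
                if (!oldestFirst.next().beingRead) {
                    oldestFirst.remove();
                    evicted = true;
                    break;
                }
            }
            if (!evicted) {
                return false;          // every retained entry is being examined; drop the new instance
            }
        }
        entries.addLast(new Entry<T>(instance));
        return true;
    }

    /** Consumer side: mark the oldest not-yet-read entry as being read and return it (null if none). */
    public synchronized Entry<T> take() {
        for (Entry<T> entry : entries) {
            if (!entry.beingRead) {
                entry.beingRead = true;
                return entry;
            }
        }
        return null;
    }

    /** Called once the consumer has finished examining an entry: pop it from the queue. */
    public synchronized void release(Entry<T> entry) {
        entries.remove(entry);
    }
}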

The consumer could be event driven or any form of periodic, since there would be a queue in either case, and when the consumer ran it could empty the queue or just process the top-of-queue entry according to the needs of the component. Therefore, each of the Scheduler methods of running a component would need to be able to check whether the component has a Durable Subscription queue, to release any buffers that had been read and pop their topic instances from the queue.
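
In the Scheduler, that check might amount to no more than a fragment like the following (hypothetical interfaces; it assumes something along the lines of the DurableQueue sketch above, with the component reporting which entries it has finished with):

/** Illustrative Scheduler fragment: after running a component, tidy its durable-subscription queues. */
final class SchedulerSketch {

    /** Hypothetical view of a schedulable component. */
    interface Component {
        void run();                         // event driven or any of the periodic activation methods
        boolean hasDurableSubscriptions();  // does this component own any durable-subscription queues?
        void releaseReadEntries();          // pop every queued entry the component finished reading
    }

    void runComponent(Component component) {
        component.run();
        if (component.hasDurableSubscriptions()) {
            component.releaseReadEntries(); // frees the buffers so producers can reuse them
        }
    }
}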

The consumer could likewise publish one or more Durable Subscription topics as a result of the content that it produced.

Therefore, this would be more general than the version of Content Filtered that I just implemented. It would be so because a particular component could publish a Durable Subscription topic that really contained the content-filter conditions, and the consumer could then publish another Durable Subscription topic that contained the filtered content. If multiple consumers subscribed to this latter topic, each would receive the filtered content. And each would receive every instance if its KQofS queue length for the topic was sufficient.

The Content Filtered feature could then be modified to be implemented in the same manner, except that each requestor would specify its own filter conditions. But the delivery to the requestor would be via its Durable Subscription queue. If the requestor was event driven, it would receive a Notify event for the response; otherwise, the topics would still be in its queue when it ran periodically. So the delivery method would be Durable Subscription but the topic would be Content Filtered, with its Request and Response subtopics.
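
Sketched in that form, the exchange reduces to two durable-subscription topics, roughly as follows (purely illustrative names: a Request subtopic carrying each requestor's filter conditions and a Response subtopic carrying the filtered content, which would be delivered through the requestor's Durable Subscription queue):

import java.util.ArrayList;
import java.util.List;

/** Illustrative Content Filtered exchange carried over two durable-subscription topics. */
public final class ContentFilteredSketch {

    /** A requestor-supplied filter condition, carried in the Request subtopic. */
    public interface Condition<T> {
        boolean matches(T candidate);
    }

    /** Request subtopic: identifies the requestor and carries its own filter condition. */
    public static final class FilterRequest<T> {
        final String requestorId;
        final Condition<T> condition;
        public FilterRequest(String requestorId, Condition<T> condition) {
            this.requestorId = requestorId;
            this.condition = condition;
        }
    }

    /** Response subtopic: the filtered content, to be queued to the requestor's durable subscription. */
    public static final class FilteredResponse<T> {
        final String requestorId;
        final List<T> matches;
        public FilteredResponse(String requestorId, List<T> matches) {
            this.requestorId = requestorId;
            this.matches = matches;
        }
    }

    /** The filtering component: consumes a FilterRequest topic and produces a FilteredResponse topic.
     *  Publishing the response as a Durable Subscription topic would mean an event-driven requestor
     *  gets a Notify event while a periodic one finds the response queued on its next run. */
    public static <T> FilteredResponse<T> filter(FilterRequest<T> request, List<T> available) {
        List<T> matches = new ArrayList<T>();
        for (T candidate : available) {
            if (request.condition.matches(candidate)) {
                matches.add(candidate);
            }
        }
        return new FilteredResponse<T>(request.requestorId, matches);
    }
}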

Therefore, Durable Subscription as mentioned in TALIS-WP5-AAR-0016 “TALIS Airborne Architecture Recommendation Document” seems like a good feature to add to my exploratory project. Plus a modification of the Content Filtered feature mentioned in “The AIDA System, A 3rd Generation IMA Platform, DIANA Project White Paper” to use Durable Subscription as the delivery method.
