New Results

Business Process Management - Service Oriented Computing

Introduction

Processes received a lot of attention in the last decade, which succeeded in producing workflow solutions for office automation. The topic is attracting renewed interest, driven by the expansion of business on the Web, but with the need to satisfy new application requirements and execution contexts. We are interested in several aspects of process engineering: introducing the flexibility required to model the subtlety of user interactions in creative applications; modeling and implementing Quality of Service properties (time, security, ... constraints); composing existing process fragments of different natures and models; decentralizing a global process for distributed execution under organizational constraints; and process governance. Most of these aspects are considered in the frame of Web services and/or peer-to-peer architectures.

Task Delegation in Human-Centric Processes

Participants: François Charoy, Khaled Gaaloul

One mechanism supporting transparency and control in human-centric decentralized collaboration is task delegation. We have deepened this concept in the context of human-centric collaborative workflows. In general, we investigate additional delegation requirements regarding the specification of advanced security and privacy mechanisms, addressing the modeling and mapping of access rights to tasks and the corresponding delegation and revocation of tasks. This work was conducted in the context of an R4eGov case study to identify the key factors distinguishing collaboration from coordination. A task management system has been developed; it can be configured with different task models supporting different kinds of behavior, including delegation across organizations [22], [24], [40].
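
To make the mechanism concrete, here is a minimal sketch of task delegation with revocation, assuming a simple ownership model; the class and method names are ours and do not reflect the actual task management system's API:

```python
# Hypothetical sketch of task delegation with revocation; names are
# illustrative and do not reflect the actual system's API.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    owner: str                      # user currently accountable for the task
    rights: set = field(default_factory=lambda: {"read", "execute"})
    delegation_chain: list = field(default_factory=list)

    def delegate(self, delegator: str, delegatee: str, rights: set) -> None:
        """Transfer a subset of the delegator's rights to the delegatee."""
        if delegator != self.owner:
            raise PermissionError(f"{delegator} does not own task {self.name}")
        if not rights <= self.rights:
            raise PermissionError("cannot delegate rights the task does not grant")
        self.delegation_chain.append((delegator, delegatee, frozenset(rights)))
        self.owner = delegatee

    def revoke(self, revoker: str) -> None:
        """Undo the most recent delegation issued by the revoker."""
        for i in reversed(range(len(self.delegation_chain))):
            delegator, delegatee, _ = self.delegation_chain[i]
            if delegator == revoker and delegatee == self.owner:
                del self.delegation_chain[i]
                self.owner = delegator
                return
        raise LookupError(f"no delegation by {revoker} to revoke")

task = Task("validate-visa-request", owner="alice")
task.delegate("alice", "bob", {"execute"})   # cross-organization delegation
task.revoke("alice")                          # alice takes the task back
assert task.owner == "alice"
```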

Composing Services with Time Constraints

Participants: Claude Godart, Nawal Guermouche, Olivier Perrin

We propose a framework for analyzing the choreography compatibility of a set of services that supports asynchronous communications and takes into account the data flow and the constraints over the data involved in message exchanges. In particular, we consider timed properties that specify delays on message exchanges. By studying the possible impacts of timed properties on a choreography, we observed that when Web services interact, implicit timed dependencies can be inferred, and these can give rise to implicit timed conflicts.

As most related works study choreographies of synchronous messages, we propose new formal primitives and a model-checking process to discover deadlocks in the context of asynchronous data exchanges [26], [25]. This work is implemented as an extension of the UPPAAL environment.
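
As a toy illustration of how implicit timed conflicts arise (the messages, delay bounds, and bound-propagation rule below are invented for the example and are much simpler than the timed automata actually used):

```python
# Illustrative sketch (not the actual UPPAAL-based tool): detect an implicit
# timed conflict by propagating message-delay bounds along a choreography chain.

# Each explicit constraint: message m must be exchanged within [lo, hi] time
# units of the message that triggers it.
explicit = {
    "order":   (0, 5),   # client -> vendor
    "ship":    (2, 6),   # vendor -> shipper, sent after receiving "order"
    "invoice": (3, 4),   # vendor -> client, sent after "ship"
}
chain = ["order", "ship", "invoice"]

def implied_bound(chain, constraints):
    """Accumulate delays along the chain: the implicit bound on the last
    message relative to the first one."""
    lo = sum(constraints[m][0] for m in chain)
    hi = sum(constraints[m][1] for m in chain)
    return lo, hi

# Suppose the client additionally requires the invoice no later than
# 4 time units after sending the order (an end-to-end constraint).
global_deadline = 4
lo, hi = implied_bound(chain, explicit)
if lo > global_deadline:
    print(f"implicit timed conflict: invoice arrives no earlier than "
          f"{lo} units after the order, but must arrive within {global_deadline}")
```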

Process Control Flow Decentralization and Distributed Enactment of Cross-Domain Service-Oriented Processes

Participants: Claude Godart, Walid Fdhila

The Web service paradigm and related technologies have provided favorable means for the realization of collaborative business processes. From both conceptual and implementation points of view, business processes are typically based on a centralized management approach. Nevertheless, it is well known that enterprise-wide process management, where processes may span multiple organizational units, requires particular consideration of scalability, heterogeneity, availability, and privacy issues, which in turn call for decentralization. For this purpose, we proposed a methodology for transforming a centralized process specification into a form that is amenable to distributed execution, incorporating the necessary synchronization between the different processing entities. The developed approach is applicable to a wide variety of service composition standards that follow the process management approach, such as WS-BPEL. It has the advantage of being flexible in that it operates on abstract constructs and provides a generalized approach to the decentralization of processes. Our approach [20], [21] is based on the computation of very basic dependencies between process elements, which provides a considerable level of understanding. The computation of basic dependencies has led, in turn, to the re-implementation of the semantics of a centralized specification with peer-to-peer interactions among the derived decentralized process specifications.
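
The core idea of deriving peer-to-peer synchronization from cross-partner dependencies can be sketched as follows; the data structures are invented for illustration, and the actual method in [20], [21] handles far richer constructs:

```python
# A minimal sketch of the dependency-based decentralization idea (the data
# structures are ours; the actual method in [20], [21] is more elaborate).

# A centralized process: activities, each assigned to the partner that
# executes it, plus control-flow edges (a -> b means b starts after a).
activities = {"receive": "vendor", "check": "bank", "ship": "shipper",
              "notify": "vendor"}
control_flow = [("receive", "check"), ("check", "ship"), ("ship", "notify")]

# A control dependency that crosses partner boundaries must be re-implemented
# as a peer-to-peer synchronization message between the derived fragments.
sync_messages = [
    (src, dst, activities[src], activities[dst])
    for (src, dst) in control_flow
    if activities[src] != activities[dst]
]

for src, dst, p_from, p_to in sync_messages:
    print(f"{p_from} notifies {p_to}: '{src} completed' enables '{dst}'")
# Each partner then enacts only its own fragment, waiting for these
# notifications instead of a central engine's dispatch.
```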

A Declarative Approach to Web Services Computing

Participants: Olivier Perrin, Ehtestam Zahoor

Web services composition and monitoring are still highly active and widely studied research directions. Little work, however, has been done on integrating these two dimensions in a unified framework and formalism. Classical approaches introduce an additional layer for handling composition monitoring and thus do not feed important execution-time violations back to the composition process. This year, we proposed the DISC framework, which provides a highly declarative, event-oriented model accommodating various aspects such as composition design and exceptions, data relationships and constraints, business calculations and decisions, compliance regulations, and security or temporal requirements. The same model is then used to combine the control of the composition definition, its execution, and the composition monitoring. We proposed a service-oriented architecture with a flexible logic, including complex event patterns and choreographies, business-oriented rules, and dynamic control of compositions. The advantages of this unified framework are the higher level of abstraction for designing, executing, and reasoning upon a composition, the flexibility of the approach, and the ability to easily include non-functional requirements such as temporal or security issues. This work has been presented in [38], [37], and we are in the process of implementing the DISC framework using the Discrete Event Calculus reasoner.
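
The declarative, event-oriented flavor of the approach can be hinted at with a small sketch; DISC itself is formalized in the Discrete Event Calculus, not in this invented rule notation, and the event names and deadline are ours:

```python
# Illustrative sketch of a declarative, event-oriented composition rule;
# DISC itself relies on the Discrete Event Calculus, not on this code.
import time

rules = []

def rule(event_type):
    """Register a handler that declaratively reacts to an event type."""
    def register(handler):
        rules.append((event_type, handler))
        return handler
    return register

@rule("invoke")
def start_deadline_watch(event, state):
    # Temporal requirement expressed on the same model as the composition:
    # the reply must arrive within 30 seconds of the invocation.
    state[event["id"]] = event["time"] + 30

@rule("reply")
def check_deadline(event, state):
    deadline = state.pop(event["id"], None)
    if deadline is not None and event["time"] > deadline:
        # Monitoring feeds violations back into the composition itself,
        # e.g. by emitting a compensation event.
        print(f"violation: reply {event['id']} missed its deadline")

def dispatch(event, state):
    for event_type, handler in rules:
        if event["type"] == event_type:
            handler(event, state)

state = {}
now = time.time()
dispatch({"type": "invoke", "id": "bookFlight", "time": now}, state)
dispatch({"type": "reply", "id": "bookFlight", "time": now + 45}, state)
```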

The DISC framework is both an extension and a complete rewrite of previous work on a pattern-based strategy, called Mashup Processing Network (MPN), for building and validating mashups [77]. The idea was based both on process patterns and on Event Processing Networks, and was intended to facilitate the creation, modeling, and verification of mashups.

We also continued the work initiated within the INRIA Associate Team VanaWeb on the provisioning of Web services compositions using constraint solvers. The approach consists in instantiating an abstract representation of a composite Web service by selecting the most appropriate concrete Web services. This instantiation is based on constraint programming techniques, which allow matching Web services against a given request. The proposal performs this instantiation in a distributed manner: the solvers for each service type solve some constraints at one level and forward the rest of the request (modified by the local solution) to the next services. When a service cannot provision part of the composition, a distributed backtracking mechanism makes it possible to revise previous solutions (i.e., provisions). A major interest of this approach is that it preserves privacy: solutions are not sent to the whole composition, services only know the services to which they are connected, and the parts of the request that are already solved are removed from subsequent requests [70].
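
A toy rendition of this forward-and-backtrack provisioning scheme, with invented candidate services and a single budget constraint standing in for a full constraint store:

```python
# A toy version of the distributed provisioning scheme: each service type has
# candidate concrete services; a local solver picks one consistent with the
# request so far and forwards the (reduced) request; on failure it backtracks
# to the previous level. Candidate data and constraints are invented.

candidates = {
    "flight": [{"name": "F1", "cost": 300}, {"name": "F2", "cost": 200}],
    "hotel":  [{"name": "H1", "cost": 250}, {"name": "H2", "cost": 150}],
    "car":    [{"name": "C1", "cost": 120}],
}
order = ["flight", "hotel", "car"]

def provision(level, budget, chosen):
    """Depth-first provisioning with distributed backtracking: each level
    only sees the remaining budget, never the full solution (privacy)."""
    if level == len(order):
        return chosen
    service_type = order[level]
    for candidate in candidates[service_type]:
        if candidate["cost"] <= budget:           # local constraint check
            result = provision(level + 1,          # forward reduced request
                               budget - candidate["cost"],
                               chosen + [candidate["name"]])
            if result is not None:
                return result
    return None                                    # trigger backtrack upstream

print(provision(0, budget=500, chosen=[]))  # -> ['F2', 'H2', 'C1']
```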

Process Change Management

Participants: François Charoy, Karim Dahmen, Claude Godart

In continuation of our previous work on change management during process execution, we are working on the governance of change at the business level and on its implications at the architecture and infrastructure levels. Following Karim Dahmen's Master's thesis [67], we are analyzing the impact of a business change, triggered by process execution monitoring, on the whole transformation chain, from the business level (e.g., process models) to the IT level (e.g., architecture).

Crisis Management Processes

Participants: François Charoy, Joern Franke

As said before, crisis management is a very promising domain for investigating new approaches to high-value, human-driven activity coordination. Our work can draw on a large number of use cases and detailed accounts of previous dramatic events to analyze requirements and confront our proposals. We have already shown that classical BPM systems are unsuitable for supporting such coordination, and we have started to develop a model that should be ready for first experimentation during the coming year. This model is founded on a distributed network of activities with advanced governance rules at the activity level. This work is conducted in cooperation with SAP Research Sophia Antipolis and is partially funded by a CIFRE grant.

Security Meta-Services Orchestration Architecture

Participants: Aymen Baouab, Olivier Perrin, Nicolas Biri, Claude Godart

SOAs have been deployed as a means to offer better flexibility, to increase efficiency through the reuse of services, and to improve interoperability by providing new opportunities to connect heterogeneous platforms. However, these benefits make security more difficult to control. New standards have been proposed to address this issue, but their current use makes the architecture much more complex and challenges the characteristics of SOA. We address this issue by separating security services from business ones and organizing the architecture according to the principle of separation of concerns. We propose a model consisting of three components: business services, security meta-services, and an orchestration service. We then show that the architecture remains secure while preserving its flexibility and agility.
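
The separation of concerns can be sketched as follows, with invented service and meta-service names; the point is only that business services stay free of security logic while the orchestration service weaves the security meta-services around them:

```python
# Illustrative sketch of the separation of concerns: business services know
# nothing about security; an orchestration service invokes security
# meta-services around each business call. All names are ours.

def authenticate(ctx):
    if ctx.get("token") != "valid-token":
        raise PermissionError("authentication meta-service rejected the call")

def audit(ctx, result):
    print(f"audit: {ctx['user']} invoked {ctx['operation']} -> {result!r}")

SECURITY_META_SERVICES = {"before": [authenticate], "after": [audit]}

def business_quote(amount):
    """A plain business service, free of any security logic."""
    return amount * 1.2

def orchestrate(operation, ctx, *args):
    # The orchestration service weaves security meta-services around the
    # business service, keeping both sides independent and replaceable.
    for pre in SECURITY_META_SERVICES["before"]:
        pre(ctx)
    result = operation(*args)
    for post in SECURITY_META_SERVICES["after"]:
        post(ctx, result)
    return result

ctx = {"user": "alice", "token": "valid-token", "operation": "quote"}
print(orchestrate(business_quote, ctx, 100))
```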

Distributed Collaborative Systems - Collaborative Knowledge Building

Introduction

Distributed collaborative systems (DCS) facilitate and coordinate collaboration among multiple users who jointly fulfill common tasks over computer networks. The explosion of Web 2.0, and especially of wiki systems, showed that a simple distributed collaborative system can transform communities of strangers into communities of collaborators. This is the main lesson taught by Wikipedia. Even though many DCS are currently available, most of them rely on a centralized architecture and consequently suffer from its intrinsic problems: lack of fault tolerance, poor scalability, costly infrastructure, and privacy issues.

Our main work focused on migrating DCS to pure peer-to-peer architectures. This requires developing new algorithms to enable collaborative editing of complex data and massive collaboration.

This year, we made several contributions: we extended algorithms to manage complex data types such as semantic wikis; we developed an algorithm that scales in the number of sites and the number of edits; we proposed a novel architecture for deploying wikis over structured peer-to-peer networks; and we proposed an approach for easing group collaboration over shared workspaces.

Scalable Optimistic Replication Algorithms for Peer-to-Peer Networks

Participants: Pascal Molli, Pascal Urso, Stéphane Weiss

Several collaborative editing systems are becoming massive: they support a huge number of users and quickly accumulate a huge amount of data. For instance, Wikipedia has been edited by 7.5 million users and reached 10 million articles in only 6 years. However, most collaborative editing systems are centralized, with costly scalability and poor fault tolerance. To overcome these limitations, we aim to provide a peer-to-peer collaborative editing system.

Peer-to-peer systems rely on replication to ensure scalability. A single object is replicated a limited number of times in structured networks (such as Distributed Hash Tables) or an unbounded number of times in unstructured peer-to-peer networks. In all cases, replication requires defining and maintaining the consistency of the copies. Most approaches to maintaining consistency do not support peer-to-peer constraints such as churn, while the others rely on data “tombstones”. In these approaches, a deleted object is replaced by a tombstone instead of being removed from the document model. Tombstones cannot be removed without compromising document consistency; therefore, the overhead required to manage the document grows continuously.

This year, we designed a new optimistic replication algorithm called Logoot [36] that ensures consistency for linear structures. Logoot tolerates a large number of copies and does not require tombstones. The approach is based on immutable and totally ordered object position identifiers. Logoot supports multiple strategies [43] for building these identifiers. The time complexity of Logoot is only logarithmic in the document size. We evaluated and validated the scalability of Logoot and compared it with tombstone-based solutions on real data extracted from Wikipedia: all the modifications ever made to thirty of the most edited and longest Wikipedia pages.
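
A didactic reduction of the identifier scheme: positions are immutable lists of (digit, site) pairs compared lexicographically, and inserting between two lines means generating a fresh identifier strictly between theirs; deletion simply drops the line, hence no tombstones. The real Logoot [36] identifiers and allocation strategies [43] are more refined than this sketch:

```python
# Simplified Logoot-style position identifiers (didactic, not the real
# algorithm): immutable, totally ordered, no tombstones needed on delete.
import random

BASE = 2 ** 16

def generate_between(p, q, site):
    """Return a new identifier strictly between identifiers p and q."""
    new = []
    depth = 0
    while True:
        lo = p[depth][0] if depth < len(p) else 0
        hi = q[depth][0] if depth < len(q) else BASE
        if hi - lo > 1:                      # room at this depth
            new.append((random.randint(lo + 1, hi - 1), site))
            return new
        # no room: keep p's pair at this depth and look one level deeper
        new.append(p[depth] if depth < len(p) else (0, site))
        depth += 1

begin, end = [(0, 0)], [(BASE, 0)]
a = generate_between(begin, end, site=1)
b = generate_between(a, end, site=2)
assert begin < a < b < end                   # total order on identifiers
```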

Distributed Collaborative Systems over Peer-to-Peer Structured Networks

Participants: Pascal Molli, Gérald Oster, Sergiu Dumitriu

The ever-growing demand for digital information raises the need for content distribution architectures providing high storage capacity, data availability, and good performance. While many simple solutions exist for the scalable distribution of quasi-static content, there is still no approach that ensures both scalability and consistency for highly dynamic content, such as the data managed inside wikis. In previous years, we studied and proposed solutions based on unstructured peer-to-peer networks. While these results were promising, the chosen architecture implies that the whole content (all wiki data) is replicated on every node, an assumption that is not acceptable in many cases. Therefore, this year, we proposed a peer-to-peer solution for distributing and managing dynamic content over a structured peer-to-peer network. The proposed solution [56], [30] combines two widely studied technologies: Distributed Hash Tables (DHT) and optimistic replication. In our “universal wiki” engine architecture (UniWiki), any number of front-ends can be added on top of a reliable, inexpensive, and consistent DHT-based storage, ensuring both read and write scalability, as well as suitability for large-scale scenarios.

A first prototype has been implemented in collaboration with Rubén Mondéjar, a PhD student from Universitat Rovira i Virgili, Catalonia (Spain). The implementation is based on Damon [28], a distributed AOP middleware, thus separating the distribution, replication, and consistency responsibilities, and making our system transparently usable by third-party wiki engines. UniWiki has proved viable and fairly efficient in large-scale scenarios.
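
The storage principle can be caricatured in a few lines, with a plain dictionary standing in for the DHT and naive operation replay standing in for the optimistic replication actually used:

```python
# A toy of the UniWiki storage idea: page edits are stored as operation logs
# under the page's DHT key; any front-end can rebuild the page from them.
# A real deployment replaces this dict with a DHT and uses a commutative
# merge (optimistic replication) instead of naive replay.
import hashlib

dht = {}  # stand-in for the distributed hash table

def dht_key(page_name):
    return hashlib.sha1(page_name.encode()).hexdigest()

def write(page_name, operation):
    """Append an edit operation to the page's log at its DHT node."""
    dht.setdefault(dht_key(page_name), []).append(operation)

def read(page_name):
    """Any front-end rebuilds the page by replaying the stored operations."""
    content = []
    for op in dht.get(dht_key(page_name), []):
        if op[0] == "insert":
            content.insert(op[1], op[2])
        elif op[0] == "delete":
            del content[op[1]]
    return "\n".join(content)

write("HomePage", ("insert", 0, "Welcome"))
write("HomePage", ("insert", 1, "to UniWiki"))
print(read("HomePage"))
```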

Easy Collaboration over Shared Workspaces

Participants: Claudia-Lavinia Ignat, Pascal Molli, Gérald Oster

Existing tools supporting parallel work have several disadvantages that prevent them from being widely used. Very often they require a complex installation and the creation of accounts for all group members. Users need to learn and deal with complex commands to use these collaborative tools efficiently. Some tools require users to abandon their favorite editors and force them to use a particular co-authorship application. In [27], we proposed the DooSo6 collaboration tool, which offers support for parallel work, requires no installation and no account creation, and is easy to use, users being able to continue working with their favorite editors. User authentication is achieved by means of a capability-based mechanism. A capability is defined as a pair (object reference, access right): a user possessing a capability has the specified right on the referenced object. The system manages capabilities for publishing and updating shared projects. The prototype relies on the data synchronizer So6 (http://www.libresource.org/).
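
A sketch of capability-based access in this spirit: a capability bundles an object reference, a right, and an unforgeable tag, so presenting it suffices as proof of authorization and no user accounts are needed. The details below are ours, not the actual So6 protocol:

```python
# Sketch of a capability-based access scheme (illustrative, not DooSo6's
# actual implementation): holding the capability is the authorization.
import hmac, hashlib, secrets

SERVER_KEY = secrets.token_bytes(32)   # known only to the workspace server

def mint_capability(project, right):
    """Create a (project, right, tag) capability; the tag prevents forgery."""
    tag = hmac.new(SERVER_KEY, f"{project}:{right}".encode(),
                   hashlib.sha256).hexdigest()
    return (project, right, tag)

def check_capability(cap):
    project, right, tag = cap
    expected = hmac.new(SERVER_KEY, f"{project}:{right}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

cap = mint_capability("shared-report", "update")   # handed out, e.g., as a link
assert check_capability(cap)                        # holder may update
assert not check_capability(("shared-report", "publish", cap[2]))
```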

Distributed Collaborative Knowledge Building

Participants: Hala Skaf-Molli, Gérôme Canals, Pascal Molli, Charbel Rahhal, Pascal Urso, Stéphane Weiss

Semantic wikis are a new generation of wikis that combine the advantages of Web 2.0 and the Semantic Web. Existing semantic wikis are based on a centralized architecture, which is in contradiction with the distributed social process of knowledge building [62]. The objective of this research is to build peer-to-peer semantic wikis for collaborative knowledge building. We are working on the following problems:

  • Building distributed semantic wikis for distributed collaborative knowledge building;
  • Knowledge personalization in distributed semantic wikis;
  • Human-computer collaboration for collaborative knowledge building.

We propose two approaches to peer-to-peer semantic wikis: the Swooki approach and the DSMW approach. Both are based on optimistic replication algorithms; they differ in the replication algorithm used and in the processes they support.

Collaborative Knowledge Building over Unstructured Peer-to-Peer Semantic Wikis

Swooki (http://wooki.sf.net/) is composed of a set of interconnected semantic wiki servers that form a peer-to-peer network. Wiki pages and their related semantic annotations are replicated over the network, and each peer offers all the services of a semantic wiki server. Swooki is built on an unstructured peer-to-peer network: a peer can join and leave the network at any moment.

Users collaborate to edit wiki pages and their related semantic annotations. A modification on a copy is executed locally and then broadcast to the other peers in the network, where it is integrated at each node. The system is correct if it respects the CCI (Causality, Convergence and Intention preservation) consistency model.

To synchronize replicated semantic wiki pages, Swooki adapts the WOOT synchronization algorithm [71]. WOOT is designed to synchronize linear structures such as wiki pages, but not non-linear structures such as semantic data, which forms an RDF graph. We extended the WOOT algorithm to synchronize semantic data and to ensure the CCI consistency model on this data [41], [33]. Swooki also integrates algorithms that support an undo mechanism [32] for reverting any modification of any user at any time. Swooki is the first peer-to-peer semantic wiki.
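
A didactic sketch of why set-like integration fits RDF annotations: if each triple carries a counter, concurrent insertions and deletions of the same triple commute and all peers converge. This is a strong simplification of the algorithms in [41], [33]:

```python
# Didactic sketch of convergent integration for RDF annotations: each triple
# carries a counter, so concurrent inserts/deletes of the same triple commute
# and all peers converge to the same graph (a simplification of [41], [33]).
from collections import Counter

class ReplicatedTripleStore:
    def __init__(self):
        self.triples = Counter()   # triple -> number of live insertions

    def integrate(self, op, triple):
        """Integrate a remote or local operation; order-insensitive."""
        if op == "insert":
            self.triples[triple] += 1
        elif op == "delete":
            self.triples[triple] -= 1

    def graph(self):
        return {t for t, n in self.triples.items() if n > 0}

t = ("Paris", "capitalOf", "France")
peer1, peer2 = ReplicatedTripleStore(), ReplicatedTripleStore()
# The same two concurrent operations, integrated in different orders:
for op in [("insert", t), ("delete", t)]:
    peer1.integrate(*op)
for op in [("delete", t), ("insert", t)]:
    peer2.integrate(*op)
assert peer1.graph() == peer2.graph()   # convergence regardless of order
```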

Collaborative Knowledge Building over Trusted Semantic Wikis Networks

The main objective of the replication algorithms of Swooki is to provide better performance and fault tolerance. Another interesting objective is to support collaborative modes that preserve the privacy of users. In this case, every user maintains her own semantic wiki server and can decide to publish pages and to integrate pages published by other users [31]. This is the principle of the DSMW approach. Collaboration in DSMW is based on the publish/subscribe model: the publication, propagation, and integration of modifications are under the control of the user.

This mode of work can be generalized to communities. A community can maintain a semantic wiki server, decide to publish some pages to other communities, and integrate pages published by other communities. Such collaborative networks ensure the autonomy of communities and preserve their privacy. In addition, they are compatible with the social organization of knowledge networks.

To develop this system, we need algorithms to synchronize the network and algorithms to manage the publication and integration of modifications. DSMW uses the Logoot algorithm [36] to synchronize the semantic wiki pages. Logoot is an optimized version of WOOT; it ensures convergence and intention preservation provided that causality is ensured.

DSMW uses publish/subscribe to propagate modifications. We developed the DSMW ontology to formalize the publish/subscribe model, together with the algorithms needed to populate this ontology [31]. We demonstrated that the DSMW algorithms ensure causality; therefore, Logoot ensures convergence and intention preservation in DSMW.
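
The propagation scheme can be sketched as follows, with invented class and method names: each server exposes a push feed of its patches, and subscribers pull and integrate them in publication order, which is what preserves causality:

```python
# Sketch of publish/subscribe propagation in the spirit of DSMW (simplified;
# all names are ours): a server pushes its local patches to a feed, and
# subscribed servers pull and integrate them in order.
class SemanticWikiServer:
    def __init__(self, name):
        self.name = name
        self.pages = {}
        self.push_feed = []        # patches published by this server
        self.pulled = {}           # peer name -> number of patches integrated

    def edit(self, page, line):
        self.pages.setdefault(page, []).append(line)
        self.push_feed.append(("append", page, line))

    def pull(self, peer):
        """Integrate the peer's patches we have not seen yet, in publication
        order, so a patch never arrives before one it causally depends on."""
        start = self.pulled.get(peer.name, 0)
        for op, page, line in peer.push_feed[start:]:
            self.pages.setdefault(page, []).append(line)
        self.pulled[peer.name] = len(peer.push_feed)

alice, bob = SemanticWikiServer("alice"), SemanticWikiServer("bob")
alice.edit("Wine", "[[category::Beverage]]")
bob.pull(alice)                     # integration is under bob's control
print(bob.pages["Wine"])
```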

We implemented these algorithms as an extension of Semantic MediaWiki. The first version of DSMW was released in October 2009 at http://www.dsmw.org.

Knowledge Personalization in Distributed Semantic Wikis

In semantic wikis, wiki pages are annotated with semantic data to facilitate navigation, information retrieval, and ontology emergence. Semantic data represents the shared knowledge base that describes the common understanding of the community. However, in a collaborative knowledge building process, knowledge is essentially created by individuals involved in a social process [59]. It is therefore fundamental to support personal knowledge building in a differentiated way. Currently, no available semantic wiki supports both personal and shared understandings. To overcome this problem, we propose a peer-to-peer collaborative knowledge building process and extend semantic wikis with personal annotation facilities to express personal understanding. In this work, we detail the personal semantic annotation model and show its implementation in distributed semantic wikis. We also present an evaluation study showing that personal annotations demand less cognitive effort than semantic data and are very useful for enriching the shared knowledge base [34], [35], [42]. This is joint research with the University of La Plata, Argentina.

Computer-Supported Collaborative Learning

Participant: Jacques Lonchamp

In the CSCL field, the collaborative situation is constructed by teachers in order to produce the specific interactions that trigger learning processes. Defining a collaborative learning situation is complex and multi-faceted: it may include a structured process, structured interaction protocols, structured artifacts, and ad hoc monitoring processes.

For several years we have been developing Omega+, a generic CSCL kernel for supporting synchronous collaborative learning applications following the dual interaction space paradigm (task space and communication space). Its main characteristic is an explicit representation of the four facets evoked above [52].

This year our work concerned the monitoring of synchronous collaborative learning processes (for coaching and self-guidance) [9], the post-mortem analysis of synchronous collaborative learning traces [8], and the relationships between these two aspects. These proposals are illustrated by experiments with real students working in small groups with Omega+ during object-oriented modeling courses.

Interoperability

Participants: Khalid Benali, Nacer Boudjlida

In the area of inter-enterprise cooperation, we face the classical matching problem caused by the use of different models by the various cooperating enterprises. Even though this issue is a historical database problem, tackled through schema integration approaches, it has been renewed in the contexts of XML schema integration on the Web, MDA approaches, ontology-based approaches to knowledge representation, and enterprise modeling.

In the continuation of initial work on semantic-based and model-based solutions for interoperability, Nacer Boudjlida and Chen Dong applied and experimented with a variety of semantic annotation types (structural, terminological, and behavioral) [64], [66] in the frame of dynamic Web service discovery [61], [65], [66] and of competence management systems [19].

Integration and model transformation techniques, as well as common ontology definitions, allow discovering and defining semantic correspondences between an enterprise's knowledge and that of its collaborators. This makes it possible to build an integration or communication solution between their systems using specific methods such as model-driven engineering, an approach now well accepted for interoperability between enterprises. Our approach to interoperability, based on an MDA architecture, uses meta-model mappings to ensure the interoperability of application models [60]. We define the interoperability between two applications through a classification of the mappings identified between their respective meta-models.
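
A toy illustration of classifying correspondences between two meta-models; the element sets, mapping table, and categories are invented for the example, and in practice the mappings are discovered rather than hand-written:

```python
# Toy illustration of classifying mappings between two meta-models; the
# element lists and mapping categories are invented for the example.
enterprise_a = {"Customer", "Order", "Invoice"}
enterprise_b = {"Client", "Order", "Bill", "Shipment"}

# A (normally discovered, here hand-written) mapping table with its class.
mappings = [
    ("Customer", "Client", "equivalence"),
    ("Order", "Order", "equivalence"),
    ("Invoice", "Bill", "equivalence"),
]

mapped_a = {src for src, _, _ in mappings}
unmatched = [e for e in enterprise_a if e not in mapped_a]

# The classification tells us which model transformations are safe to
# generate: equivalences translate directly, while unmatched elements need
# human mediation before the two applications can interoperate.
for src, dst, kind in mappings:
    print(f"{src} -> {dst}: {kind}")
print("needs mediation:", unmatched)
```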

Khalid Benali contributed to work on verifying the conformity of process models [76].
