# Publications

Copyright Notice: The following material has been made public by the author for timely dissemination of research findings. These articles may be protected by copyright and may require owner permission for reproduction or distribution.


Autonomic Computing Research Laboratory
School of Computing and Information Sciences
Florida International University
ECS 212 C
11200 SW 8th St., Miami, FL 33199

Phone: (305) 348-1835

# Books

# Book Chapters


# Journal Articles


# Refereed Conference and Workshop Proceedings

[1] J. Delgado, L. Fong, Y. Liu, N. Bobroff, S. Seelam, and S. Masoud Sadjadi. Efficiency assessment of parallel workloads on virtualized resources. In Proceedings of the 4th IEEE/ACM International Conference on Utility and Cloud Computing (UCC 2011), Melbourne, Australia, December 2011. [ bib | .pdf ]
In cloud computing, virtual containers on physical resources are provisioned to requesting users. Resource providers may pack as many containers as possible onto each of their physical machines, or may pack specific types and quantities of virtual containers based on user or system QoS objectives. Such elastic provisioning schemes for resource sharing may present major challenges to scientific parallel applications that require task synchronization during execution. Such elastic schemes may also inadvertently lower the utilization of computing resources. In this paper, we describe the elasticity constraint effect and the ripple effect, which negatively impact application response time and system utilization. We quantify the impact using real workload traces through simulation. Then, we demonstrate that some resource scheduling techniques can be effective in mitigating the impacts. We find that a tradeoff is needed among the elasticity of virtual containers, the complexity of scheduling algorithms, and the response time of applications.
Keywords: Efficiency Assessment, Parallel Workloads, Virtual Resources.

[2] Ingrid Buckley, Eduardo B. Fernandez, Marco Anisetti, Claudio A. Ardagna, S. Masoud Sadjadi, and Ernesto Damiani. Towards pattern-based reliability certification of services. In Proceedings of the International Symposium on Distributed Objects and Applications and the 1st International Symposium on Secure Virtual Infrastructures (DOA-SVI'11), Crete, Greece, October 2011. [ bib | .pdf ]

[3] David Villegas and S. Masoud Sadjadi. DEVA: Distributed ensembles of virtual appliances in the cloud.
In Proceedings of the 17th Euro-Par Conference (Euro-Par 2011), pages 467-478, Bordeaux, France, August 2011. Part I. [ bib | .pdf ]
Low upfront costs, rapid deployment of infrastructure, and flexible management of resources have resulted in the quick adoption of cloud computing. Nowadays, different types of applications in areas such as enterprise web, virtual labs, and high-performance computing are already being deployed in private and public clouds. However, one of the remaining challenges is how to allow users to specify Quality of Service (QoS) requirements for composite groups of virtual machines and enforce them effectively across the deployed resources. In this paper, we propose an Infrastructure as a Service resource manager capable of allocating Distributed Ensembles of Virtual Appliances (DEVAs) in the Cloud. DEVAs are groups of virtual machines and their network connectivities instantiated on heterogeneous shared resources with QoS specifications for individual entities as well as their connections. We discuss the different stages in their lifecycle: declaration, scheduling, provisioning, and dynamic management, and show how this approach can be used to maintain QoS for complex deployments of virtual resources.
Keywords: DEVA, Cloud Computing, Virtual Appliances

[4] David Villegas and S. Masoud Sadjadi. Mapping non-functional requirements to cloud applications. In Proceedings of the 2011 International Conference on Software Engineering and Knowledge Engineering (SEKE 2011), Miami, Florida, July 2011. (acceptance rate 31%). [ bib | .pdf ]
Cloud computing represents a solution for applications with high scalability needs, where usage patterns, and therefore resource requirements, may fluctuate based on external circumstances such as exposure or trending. However, in order to take advantage of the cloud's benefits, software engineers need to be able to express the application's needs in quantifiable terms.
Additionally, cloud providers have to understand such requirements and offer methods to acquire the necessary infrastructure to fulfill the users' expectations. In this paper, we discuss the design and implementation of an Infrastructure as a Service cloud manager such that non-functional requirements determined during the requirements analysis phase can be mapped to properties for a group of Virtual Appliances running the application. The discussed management system ensures that the expected Quality of Service is maintained during execution and can be considered during different development phases.
Keywords: DEVA, Cloud Computing, Non-Functional Requirements, Software Engineering

[5] Xabriel J. Collazo-Mojica and S. Masoud Sadjadi. A metamodel for distributed ensembles of virtual appliances. In Proceedings of the 2011 International Conference on Software Engineering and Knowledge Engineering (SEKE 2011), Miami, Florida, July 2011. (acceptance rate 31%). [ bib | .pdf ]
We present our work on modeling distributed ensembles of virtual appliances (DEVAs) on Infrastructure as a Service (IaaS) clouds. Designing solutions on IaaS providers requires a good understanding of the underlying details such as the software installation or the network configuration. We propose the use of DEVAs, a modeling approach built on top of the notion of virtual appliances, that allows easy-to-compose and ready-to-use cloud application architectures that are IaaS-agnostic and that abstract away unnecessary details for web application developers. In this paper, we extend the definition of a DEVA from previous work by presenting an underlying metamodel and showing how that metamodel can be transformed into an actual deployment. We also present a case study where we model a web application architecture, and we discuss how we can instantiate it in an IaaS cloud. We argue that the DEVA modeling approach is suitable for typical cloud use cases.
Keywords: DEVA, Cloud Computing, Metamodel, Software Engineering

[6] Xabriel J. Collazo-Mojica, S. Masoud Sadjadi, Fabio Kon, and Dilma Da Silva. Virtual environments: Easy modeling of interdependent virtual appliances in the cloud. In Proceedings of the SPLASH 2010 Workshop on Flexible Modeling Tools (SPLASH 2010), Reno, Nevada, October 2010. [ bib | .pdf ]
We present our ideas for modeling groups of interdependent virtual machines in the cloud. We call these models virtual environments. This abstraction is built on top of virtual appliances and the services they provide. We discuss previous attempts in this domain and present our motivations for working on an uncomplicated model for non-expert users of cloud computing such as Web developers and CS students. Visual and internal representations of the model are presented. Early work on a prototype implementation is described. We argue that easier-to-use models such as ours are needed for today's and tomorrow's distributed applications.
Keywords: virtual environment, virtual appliance, flexible modeling, cloud computing.

[7] Selim Kalayci, Gargi Dasgupta, Liana Fong, Onyeka Ezenwoye, and S. Masoud Sadjadi. Distributed and adaptive execution of Condor DAGMan workflows. In Proceedings of the 22nd International Conference on Software Engineering and Knowledge Engineering (SEKE 2010), San Francisco Bay, CA, July 2010. [ bib | .pdf ]
Large-scale applications, in the form of workflows, may require the coordinated usage of resources spread across multiple administrative domains. Scalable solutions need a decentralized approach to coordinate the execution of such workflows. At runtime, adjustments to the workflow execution plan may be required to meet Quality of Service objectives. In this paper, we provide a decentralized execution approach for large-scale workflows on different resource domains. We also provide a low-overhead, decentralized runtime adaptation mechanism to improve the performance of the system.
Our prototype implementation is based on the standard Condor DAGMan workflow execution engine and does not require any modifications to Condor or its underlying system.
Keywords: Application workflow, resource domain, execution, decentralization, distributed system.

[8] Javier Delgado, João Gazolla, Esteban Clua, and S. Masoud Sadjadi. An incremental approach to porting complex scientific applications to GPU/CUDA. In Proceedings of the IV Brazilian e-Science Workshop, Minas Gerais, Brazil, July 2010. [ bib | .pdf ]
This paper proposes and describes a methodology for porting complex scientific applications originally written in FORTRAN to NVIDIA CUDA. The process was developed and validated by porting an existing FORTRAN weather forecasting algorithm to a GPU parallel paradigm. We believe that the proposed porting methodology can be successfully applied to several other existing scientific applications.

[9] Javier Delgado, S. Masoud Sadjadi, Hector Duran, Marlon Bright, and Malek Adjouadi. Performance prediction of weather forecasting software on multicore systems. In Proceedings of the 24th IEEE International Parallel & Distributed Processing Symposium (IPDPS-2010), 11th Parallel and Distributed Scientific and Engineering Computing (PDSEC) workshop, Atlanta, Georgia, April 2010. [ bib ]

[10] Onyeka Ezenwoye, Salome Busi, and S. Masoud Sadjadi. Dynamically reconfigurable data-intensive service composition. In Proceedings of the 6th International Conference on Web Information Systems and Technologies (WEBIST 2010), Valencia, Spain, April 2010. [ bib | .pdf ]
The distributed nature of services poses significant challenges to building robust service-based applications. A major aspect of this challenge is finding a model of service integration that promotes ease of dynamic reconfiguration in response to internal and external stimuli.
Centralized models of composition are not conducive to data-intensive applications such as those in the scientific domain. Decentralized compositions are more complicated to manage, especially since no service has a global view of the interaction. In this paper, we identify the requirements for dynamic reconfiguration of data-intensive composite services. A hybrid composition model that combines the attributes of centralization and decentralization is proposed. We argue that this model promotes dynamic reconfiguration of data-intensive service compositions.
Keywords: Service Composition Models, Scientific Workflow, Adaptability, Dynamic Reconfiguration, Choreography, Orchestration.

[11] Onyeka Ezenwoye, Balaji Viswanathan, S. Masoud Sadjadi, Liana Fong, Gargi Dasgupta, and Selim Kalayci. Task decomposition for adaptive data staging in workflows for distributed environments. In Proceedings of the 21st International Conference on Software Engineering and Knowledge Engineering (SEKE 2009), pages 16-19, Boston, MA, July 2009. [ bib | .pdf ]
Scientific workflows are often composed by scientists who are not particularly familiar with the performance and fault-tolerance issues of the underlying layer. The inherent nature of the infrastructure and environment for scientific workflow applications means that the movement of data comes with reliability challenges. Improving the reliability of scientific workflows in distributed environments calls for the decoupling of data staging and computation activities, and each aspect needs to be addressed separately. In this paper, we present an approach to managing scientific workflows that specifically provides constructs for reliable data staging. In our framework, data staging tasks are automatically separated from computation tasks in the definition of the workflow. High-level policies can be provided that allow for dynamic adaptation of the workflow to occur.
Our approach permits the separate specification of the functional and non-functional requirements of the application and is dynamic enough to allow for the alteration of the workflow at runtime for optimization.
Keywords: Data Staging, Scientific Workflow, and Distributed Systems.

[12] Ingrid Buckley, Eduardo B. Fernandez, Gustavo Rossi, and S. Masoud Sadjadi. Web services reliability patterns. In Proceedings of the 21st International Conference on Software Engineering and Knowledge Engineering (SEKE 2009), pages 4-9, Boston, MA, July 2009. [ bib | .pdf ]
Due to the widespread use of web services by enterprises, the need to ensure their reliability has become crucial. There are several standards that intend to govern how web services are designed and implemented, including protocols to which they must adhere. These standards include the WS-Reliability and WS-ReliableMessaging standards that define rules for reliable messaging. We present here patterns for these standards, which define how to achieve reliable messaging between entities. We compare their features and use.
Keywords: Web Services, Reliability, and Patterns.

[13] S. Masoud Sadjadi, Sandie Kappes, and Laura F. McGinnis. Grid enablement of scientific applications on TeraGrid. In Proceedings of the TeraGrid 2009 Conference, Arlington, Virginia, June 2009. [ bib | .pdf ]
The lack of access to sufficient computational, storage, and networking resources in the past three years has proven to be the major hurdle in the rate of discovery for our GCB research projects. The TeraGrid Pathway Fellowship Program has helped us address this problem.
In this presentation, we will show how this program has helped us enhance the syllabus and contents of the GCB course with the existing TeraGrid educational and training materials (e.g., the CI Tutor) so that the students taking the GCB course are able to utilize the TeraGrid resources to accelerate the rate of their findings and to submit their research papers for publication within the two semesters of the GCB program.
Keywords: Global CyberBridges, TeraGrid, and High-Performance Computing.

[14] Yanbin Liu, David Villegas, Norman Bobroff, Liana Fong, Ivan Rodero, Seetharami Seelam, and S. Masoud Sadjadi. An experimental system for grid meta-broker evaluation. In Proceedings of the ACM Large-scale System and Application Performance workshop (LSAP 2009) of the International Symposium on High Performance Distributed Computing (HPDC 2009), pages 11-18, Munich, Germany, June 2009. [ bib | .pdf ]
The Grid meta-broker is a key enabler in realizing the full potential of inter-operating grid computing systems. A challenge in properly evaluating the effectiveness of meta-brokers is the complexity of developing a realistic grid experimental environment. In this paper, this challenge is addressed by a unique combination of two approaches: using compressed workload traces to demonstrate the resource matching and scheduling functions of the meta-broker, and using emulation to provide flexible and scalable modeling and management of the local resources of a grid environment. Real workload traces are compressed while preserving their key workload characteristics to allow exploration of various dimensions of meta-broker functions in reasonable time. Evaluation of round-robin, queue-length, and utilization-based meta-broker scheduling algorithms shows that they have different effects on various workloads.
Keywords: Grid Computing, Meta-Broker, Job Scheduling, and Experimental Evaluation.

[15] Juan C. Martinez, Lixi Wang, Ming Zhao, and S. Masoud Sadjadi.
Experimental study of large-scale computing on virtualized resources. In Proceedings of the 3rd International Workshop on Virtualization Technologies in Distributed Computing (VTDC 2009) of the IEEE/ACM 6th International Conference on Autonomic Computing and Communications (ICAC-2009), pages 35-41, Barcelona, Spain, June 2009. [ bib | .pdf ]
Parallel applications have a pressing need for the utilization of more and more resources to meet user performance expectations. Unfortunately, these resources are not necessarily available within one single domain. Grid computing provides a solution for scaling out from a single domain; however, it also brings another problem for some applications: resource heterogeneity. Since some applications require homogeneous resources for their execution, virtualizing the resources is a novel and viable solution. In this paper, we present two parallel applications, namely WRF and mpiBLAST, and report the results of different runs scaling them out from 2 to 128 virtual nodes. We then analyze the effects of scaling out based on each application's communication behavior.
Keywords: Large Scale Computing, Virtualized Resources, and Experimental Study.

[16] Javier Delgado, Mark Joselli, Silvio Stanzani, S. Masoud Sadjadi, Esteban Clua, and Heidi Alvarez. A learning and collaboration platform based on SAGE. In Proceedings of the ACM 14th Western Canadian Conference on Computing Education (WCCCE 2009), pages 70-76, Simon Fraser University, Vancouver, Canada, May 2009. [ bib | .pdf ]
In this paper, we describe the use of a tiled-display wall platform as a general-purpose collaboration and learning platform. The main scenario of emphasis for this work is online learning by users in different countries. We empirically evaluate the efficacy of this platform for our purposes, discussing both its advantages and the shortcomings that we found.
We also describe an enhancement made to make it more viable for our target usage scenario by implementing an interface for a modern human interface device.
Keywords: Cyberinfrastructure, interdisciplinary, collaboration, e-learning.

[17] S. Masoud Sadjadi, Shu-Ching Chen, Borko Furht, Pete Martinez, Scott Graham, Steve Luis, Juan Caraballo, and Yi Deng. PIRE: A global living laboratory for cyberinfrastructure application enablement. In Proceedings of the ACM Tapia Celebration of Diversity in Computing 2009 (Tapia'09), pages 64-69, Portland, Oregon, April 2009. [ bib | .pdf ]
This Partnership for International Research and Education (PIRE) is a 5-year project funded by the National Science Foundation that aims to provide 196 international research and training experiences to its participants by leveraging the established programs, resources, and community of the Latin American Grid (LA Grid), an international academic and industry partnership designed to promote research, education, and workforce development at major institutions in the USA, Mexico, Argentina, Spain, and other locations around the world. In return, PIRE will take LA Grid to the next level of research and education excellence. Top students, particularly underrepresented minorities, are engaged, and each participant will receive multiple perspectives on each of three different aspects of collaboration as they work with (1) local and international researchers, in (2) academic and industrial research labs, and on (3) basic and applied research projects. PIRE participants will engage not only in computer science research topics focused on transparent cyberinfrastructure enablement, but will also be exposed to challenging scientific areas of national importance such as meteorology, bioinformatics, and healthcare.
During the first year of this project, 18 students out of a pool of 68 applicants were selected; they participated in complementary PIRE research projects, visited 7 international institutions (spanning 5 countries and 4 continents), and published 9 papers.

[18] Masoud Milani, S. Masoud Sadjadi, Raju Rangaswami, Peter Clarke, and Tao Li. Research experiences for undergraduates: Autonomic computing research at FIU. In Proceedings of the ACM Tapia Celebration of Diversity in Computing 2009 (Tapia'09), pages 93-97, Portland, Oregon, April 2009. [ bib | .pdf ]
According to the Computing Research Association, between 2003 and 2007 each year fewer than 3% of graduates in computer science and computer engineering were Hispanic or African American, and fewer than 20% were women. This under-representation not only compromises the competitiveness of the US economy, but also deepens the divide and imbalances in our society. It is therefore imperative that undergraduate institutions introduce students to graduate school at an early stage of their academic careers and provide them with the tools necessary for the successful conduct of research in graduate programs. The School of Computing and Information Sciences (SCIS) at Florida International University (FIU) has been working to strengthen the pipeline of underrepresented students into graduate work in computer science by hosting an NSF Research Experiences for Undergraduates (REU) site for the last three years. Our REU site provided this opportunity to 30 undergraduate students, 23 of whom were underrepresented, including 7 females, 16 Hispanics, and 4 African Americans, who published 13 technical papers. Six of the ten students who have already graduated have started their graduate studies.

[19] Selim Kalayci, Onyeka Ezenwoye, Balaji Viswanathan, Gargi Dasgupta, S. Masoud Sadjadi, and Liana Fong. Design and implementation of a fault tolerant job flow manager using job flow patterns and recovery policies.
In Proceedings of the 6th International Conference on Service Oriented Computing (ICSOC'08), volume 5364/2008, pages 54-69, Sydney, Australia, December 2008. Springer Berlin / Heidelberg. (acceptance rate 20.4%). [ bib | .pdf ]
Nowadays, many grid applications are developed as job flows that are composed of multiple jobs. The execution of job flows requires the support of a job flow manager and a job scheduler. Due to the long-running nature of job flows, support for fault tolerance and recovery policies is especially important, and yet complicated due to the sequencing and dependency of jobs within a flow and the required coordination between workflow engines and job schedulers. In this paper, we describe the design and implementation of a job flow manager that supports fault tolerance. First, we identify and label job flow patterns within a job flow at deployment time. Next, at run time, we introduce a proxy that intercepts and resolves faults using job flow patterns and their corresponding fault recovery policies. Our design has the advantages of separating job flow and fault handling logic, requiring no manipulation at modeling time, and providing flexibility in fault resolution at run time. We validate our design with a prototypical implementation based on the ActiveBPEL workflow engine, the GridWay metascheduler, and the Montage application as a case study.

[20] Gargi Dasgupta, Onyeka Ezenwoye, Liana Fong, Selim Kalayci, S. Masoud Sadjadi, and Balaji Viswanathan. Design of a fault-tolerant job-flow manager for grid environments using standard technologies, job-flow patterns, and a transparent proxy. In Proceedings of the 20th International Conference on Software Engineering and Knowledge Engineering (SEKE 2008), pages 814-819, San Francisco Bay, USA, July 2008. (36% acceptance rate for full papers). [ bib | .pdf ]
The execution of job flow applications is a reality today in academic and industrial domains.
Current approaches to the execution of job flows often follow proprietary solutions for expressing the job flows and do not leverage recurrent job-flow patterns to address faults in Grid computing environments. In this paper, we provide a design solution for the development of job-flow managers that uses standard technologies such as BPEL and JSDL to express job flows and employs a two-layer peer-to-peer architecture with interoperable protocols for cross-domain interactions among job-flow managers. In addition, we identify a number of recurring job-flow patterns and introduce their corresponding fault-tolerant patterns to address runtime faults and exceptions. Finally, to keep the business logic of job flows separate from their fault-tolerant behavior, we use a transparent proxy that intercepts job-flow execution at runtime to handle potential faults using a growing knowledge base that contains the most recently identified job-flow patterns and their corresponding fault-tolerant patterns.

[21] Onyeka Ezenwoye and S. Masoud Sadjadi. A language-based approach to addressing reliability in composite web services. In Proceedings of the 20th International Conference on Software Engineering and Knowledge Engineering (SEKE 2008), pages 649-654, San Francisco Bay, USA, July 2008. (36% acceptance rate for full papers). [ bib | .pdf ]
With Web services, distributed applications can be encapsulated as self-contained, discoverable software components that can be integrated to create other applications. BPEL allows for the composition of existing Web services to create new higher-function Web services. We identified that the techniques currently applied at development time are not sufficient for ensuring the reliability of composite Web services. In this paper, we present a language-based approach to transparently adapting BPEL processes to improve reliability.
This approach addresses reliability at the business process layer (i.e., the language layer) using a code generator, which weaves fault-tolerant code into the original code and an external proxy. The generated code uses standard BPEL constructs and, therefore, does not require any changes to the BPEL engine.

[22] Hector A. Duran Limon, S. Masoud Sadjadi, Raju Rangaswami, Shu Shimizu, Liana Fong, Rosa M. Badia, Pat Welsh, Sandeep Pattnaik, Anthony Praino, Javier Figueroa, Javier Delgado, Xabriel J. Collazo-Mojica, David Villegas, Selim Kalayci, Gargi Dasgupta, Onyeka Ezenwoye, Khalid Saleem, Juan Carlos Martinez, Ivan Rodero, Shuyi Chen, Javier Muñoz, Diego Lopez, Julita Corbalan, Hugh Willoughby, Michael McFail, Christine Lisetti, and Malek Adjouadi. Grid enablement and resource usage prediction of weather research and forecasting. In Proceedings of the Collaborative and Grid Computing Technologies Workshop, page 4, Cancun, Mexico, April 2008. [ bib ]
In the last few years, we have witnessed a number of devastating hurricanes around the world. It is believed that global climate change is fuelling an increase in the magnitude and also in the average number of hurricanes and tropical storms. Therefore, there is a pressing need to provide a range of users with accurate and timely information that can enable effective planning for and response to potential hurricane landfalls. The Weather Research and Forecasting (WRF) code has been adopted worldwide by meteorological services. The numerical model employed by WRF demands a large number of computing nodes. Such demands can increase dramatically if WRF is used to model a large geographical area at a high resolution level (e.g., < 1 km). Although WRF can be run on homogeneous clusters, it was not designed for grid environments, which can potentially offer a larger amount of computing resources. The transparent Grid enablement of WRF includes carrying out intelligent brokering and scheduling.
This is needed to ensure that a run of WRF will take an acceptable amount of time and to optimize Grid resource usage. Resource usage prediction is required to achieve such brokering and scheduling. However, current approaches to resource prediction tend to address parts of the problem by focusing on either a specific application, a specific platform, or a small subset of system resources. In this paper, we present our research on Grid enablement of WRF by leveraging our work on resource usage prediction, meta-scheduling, and job-flow management. We report on our experience in the design and development of the LA Grid WRF Portal, which provides a comprehensive but customized Web-based user interface for meteorologists to conduct their hurricane research and to forecast hurricanes in near real time. We place special focus on our approach to modelling application resource usage in a platform-independent manner, enabling prediction of resource usage on unseen platforms.
Keywords: Grid Enablement, Scientific Applications, WRF, Portal, Meta-Scheduling, Job Flow Management, Modeling, and Profiling.

[23] Ricardo Koller, Raju Rangaswami, Joseph Marrero, Igor Hernandez, Geoffrey Smith, Mandy Barsilai, Silviu Necula, S. Masoud Sadjadi, Tao Li, and Krista Merrill. Anatomy of a real-time intrusion prevention system. In Proceedings of the 5th IEEE International Conference on Autonomic Computing (ICAC-2008), pages 151-160, Chicago, IL, June 2008. (25% acceptance rate). [ bib | .pdf ]
Host intrusion prevention systems for both servers and end-hosts must address the dual challenges of accuracy and performance. Researchers have mostly focused on addressing the former challenge, suggesting solutions based either on exploit-based penetration detection or anomaly-based misbehavior detection, yet stopping short of comprehensive solutions that leverage the merits of both approaches.
The second challenge, however, is rarely addressed; doing so comprehensively is important for practical usability, since these systems can introduce substantial overhead and cause system slowdown, more so when the system load is high. We present Rootsense, a holistic and real-time intrusion prevention system that combines the merits of misbehavior-based and anomaly-based detection. Four principles govern the design and implementation of Rootsense. First, Rootsense audits events within different subsystems of the host OS and correlates them to comprehensively capture the global system state. Second, Rootsense restricts the detection domain to root compromises only; doing so reduces runtime overhead and increases detection accuracy (root behavior is more easily modeled than user behavior). Third, Rootsense adopts a dual approach to intrusion detection: a root penetration detector detects activities that exploit system vulnerabilities to penetrate the security perimeter, and a root misbehavior detector tracks misbehavior by root processes. Fourth, Rootsense is designed to be configurable for overhead management, allowing the system administrator to tune the overhead characteristics of the intrusion prevention system that affect foreground task performance. A Linux implementation of Rootsense is analyzed for both accuracy and performance, using several real-world exploits and a range of end-host and server benchmarks.
Keywords: Operating systems, security, Rootsense.

[24] Khalid Saleem, S. Masoud Sadjadi, and Shu-Ching Chen. Towards a self-configurable weather research and forecasting system. In Proceedings of the 5th IEEE International Conference on Autonomic Computing (ICAC-2008), pages 195-196, Chicago, IL, June 2008. (38% acceptance rate for full and short papers together).
[ bib | www | .pdf ]
Current weather forecast and visualization systems lack the scalability to support numerous customized requests for weather research and forecasting, especially at the time of natural disasters such as a hurricane landfall. Most of these systems provide somewhat generic forecasts for different types of users, including meteorologists, business owners, and emergency management officials. While such a forecast may be relevant to some specific groups of users, to others it may not provide any useful information apart from the prediction of impending weather hazards. In other words, one size does not fit all. Weather data and its visualization indicating inclement weather conditions such as snow or ice storms, tornadoes, and hurricanes need to be customized for the different types of users using such systems, thus assisting them in ensuring effective preparatory and meticulous recovery plans. In this paper, we propose a self-configurable, user-specific, on-demand weather research and forecasting system that utilizes Grid computing to facilitate scalable weather forecast data analysis and prediction.
Keywords: Web-based portal, weather forecasting, WRF, self-configuration, ensemble forecasting.

[25] Gargi Dasgupta, Onyeka Ezenwoye, Liana Fong, Selim Kalayci, S. Masoud Sadjadi, and Balaji Viswanathan. Runtime fault-handling for job-flow management in grid environments. In Proceedings of the 5th IEEE International Conference on Autonomic Computing (ICAC-2008), pages 201-202, Chicago, IL, June 2008. (38% acceptance rate for full and short papers together). [ bib | www | .pdf ]
The execution of job flow applications is a reality today in academic and industrial domains. In this paper, we propose an approach to adding self-healing behavior to the execution of job flows without the need to modify the job flow engines or redevelop the job flows themselves.
We show the feasibility of our non-intrusive approach to self-healing by inserting a generic proxy into an existing two-level job-flow management system, which employs job-flow-based service orchestration at the upper level and service choreography at the lower level. The generic proxy is inserted transparently between these two layers so that it can intercept all their interactions. We developed a prototype of our approach in a real Grid environment to show how the proxy facilitates runtime handling of failure recovery. Keywords: job-flow management, meta-scheduler, generic proxy, fault-tolerance, job-flows. [26] Yanbin Liu, S. Masoud Sadjadi, Liana Fong, Ivan Rodero, David Villegas, Selim Kalayci, Norman Bobroff, and Juan Carlos Martinez. Enabling autonomic meta-scheduling in grid environments. In Proceedings of the 5th IEEE International Conference on Autonomic Computing (ICAC-2008), pages 199-200, Chicago, IL, June 2008. (38% acceptance rate for Full and Short papers together.). [ bib | www | .pdf ] Grid computing supports workload execution on computing resources that are shared across a set of collaborative organizations. At the core of workload management for Grid computing is a software component, called a meta-scheduler or Grid resource broker, that provides a virtual layer on top of heterogeneous Grid middleware, schedulers, and resources. Meta-schedulers typically enable end-users and applications to compete over distributed shared resources through the use of one or more instances of the same meta-scheduler, in a centralized or distributed manner, respectively. We propose an approach to enabling autonomic meta-scheduling through the use of a new communication protocol that, if adopted by different meta-schedulers or by the applications using them, can improve workload execution while avoiding the potential chaos that can result from blind competition over resources. 
This can be made possible by allowing the meta-schedulers and/or their applications to engage in a process to negotiate their roles (e.g., consumer, provider, or both), scheduling policies, service-level agreements, etc. To show the feasibility of our approach, we developed a prototype that enables some preliminary autonomic management among three different meta-schedulers, namely, GridWay, eNANOS, and TDWB. Keywords: meta-scheduler, grid resource broker, grid interoperability, autonomic workload management. [27] Norman Bobroff, Liana Fong, Selim Kalayci, Yanbin Liu, Juan Carlos Martinez, Ivan Rodero, S. Masoud Sadjadi, and David Villegas. Enabling interoperability among meta-schedulers. In Proceedings of the 8th IEEE International Symposium on Cluster Computing and the Grid (CCGrid-2008), pages 306-315, Lyon, France, 2008. (32% acceptance rate.). [ bib | .pdf ] Grid computing supports the harnessing of computing resources from cooperating organizations or institutes in the form of virtual organizations. At the core of matching resource requests for jobs is a resource brokering middleware, commonly known as a meta-scheduler or a meta-broker. Recent advances in meta-scheduling broaden resource matching across multiple virtual organizations rather than limiting it to a single one. Different architectures have been proposed for these interoperating meta-scheduling systems. In this paper, we present a hybrid approach, combining hierarchical and peer-to-peer architectures for flexibility and extensibility of these systems. We also define a set of protocols to allow different meta-scheduler instances to communicate using Web Services. In our experiments, three remote organizations using different scheduling technologies (namely, IBM, BSC, and FIU) interoperate using the communication protocols. Keywords: meta-scheduler, resource broker, interoperable scheduling protocol. [28] S. 
Masoud Sadjadi, Shu Shimizu, Javier Figueroa, Raju Rangaswami, Javier Delgado, Hector Duran, and Xabriel Collazo. A modeling approach for estimating execution time of long-running scientific applications. In Proceedings of the 22nd IEEE International Parallel & Distributed Processing Symposium (IPDPS-2008), the Fifth High-Performance Grid Computing Workshop (HPGC-2008), pages 1-8, Miami, Florida, April 2008. [ bib | www | .pdf ] In a Grid computing environment, resources are shared among a large number of applications. Brokers and schedulers find matching resources and schedule the execution of the applications by monitoring dynamic resource availability and employing policies such as first-come-first-served and back-filling. To support applications with timeliness requirements in such an environment, brokering and scheduling algorithms must address an additional problem: they must be able to estimate the execution time of the application on the currently available resources. In this paper, we present a modeling approach to estimating the execution time of long-running scientific applications. The modeling approach we propose is generic; models can be constructed by merely observing the application execution “externally” without using intrusive techniques such as code inspection or instrumentation. The model is cross-platform; it enables prediction without the need for the application to be profiled first on the target hardware. To show the feasibility and effectiveness of this approach, we developed a resource usage model that estimates the execution time of a weather forecasting application in a multi-cluster Grid computing environment. 
We validated the model through extensive benchmarking and profiling experiments and observed prediction errors that were within 10%. Based on this experience, we believe that our approach can be used to model the execution time of other time-sensitive scientific applications, thereby enabling the development of more intelligent brokering and scheduling algorithms. Keywords: High-Performance Computing, Profiling, Behavior Modeling, Weather Research and Forecasting. [29] S. Masoud Sadjadi, Liana Fong, Rosa M. Badia, Javier Figueroa, Javier Delgado, Xabriel J. Collazo-Mojica, Khalid Saleem, Raju Rangaswami, Shu Shimizu, Hector A. Duran Limon, Pat Welsh, Sandeep Pattnaik, Anthony Praino, David Villegas, Selim Kalayci, Gargi Dasgupta, Onyeka Ezenwoye, Juan Carlos Martinez, Ivan Rodero, Shuyi Chen, Javier Muñoz, Diego Lopez, Julita Corbalan, Hugh Willoughby, Michael McFail, Christine Lisetti, and Malek Adjouadi. Transparent grid enablement of weather research and forecasting. In Proceedings of the 15th ACM Mardi Gras conference: From lightweight mash-ups to lambda grids: Understanding the spectrum of distributed computing requirements, applications, tools, infrastructures, interoperability, and the incremental adoption of key capabilities, Baton Rouge, Louisiana, USA, January 2008. (8 pages). [ bib | www | .pdf ] The impact of hurricanes is so devastating throughout different levels of society that there is a pressing need to provide a range of users with accurate and timely information that can enable effective planning for and response to potential hurricane landfalls. The Weather Research and Forecasting (WRF) code is the latest numerical model that has been adopted by meteorological services worldwide. The current version of WRF has not been designed to scale out of a single organization's local computing resources. 
However, the high resource requirements of WRF for fine-resolution and ensemble forecasting demand a large number of computing nodes, which typically cannot be found within one organization. Therefore, there is a pressing need for the Grid-enablement of the WRF code such that it can utilize resources available in partner organizations. In this paper, we present our research on Grid enablement of WRF by leveraging our work in transparent shaping, GRID superscalar, profiling, code inspection, code modeling, meta-scheduling, and job flow management. Keywords: Grid Enablement, Scientific Applications, WRF, Portal, Meta-Scheduling, Job Flow Management, Modeling, and Profiling. [30] S. Masoud Sadjadi, Selim Kalayci, and Yi Deng. A self-configuring communication virtual machine. In Proceedings of the 2008 IEEE International Conference on Networking, Sensing and Control (ICNSC-08), pages 739-744, Sanya, China, April 2008. [ bib | .pdf ] Today’s communication-based applications are mostly crafted in a stovepipe development paradigm, which is too inflexible to be used by various domain-specific applications and costly in the development phase. In a previous paper [1], we proposed a new design called CVM (Communication Virtual Machine) to overcome these problems by having a high-level API which can be reused and extended easily for user-centric applications in any domain. Within the CVM framework, we came across a practical issue, which is actually the case for any end-to-end multimedia communication, namely the NAT-traversal (network address translation) problem that limits the reliability and availability of CVM and variants of CVM. In this paper, we explain the necessity of self-configuration for the NAT-traversal problem in end-to-end communications, and propose a solution within the core CVM framework. Keywords: Communication Virtual Machine, CVM, Self-Configuration, NAT-Resolution. [31] Xing Hang, David Villegas Castillo, S. Masoud Sadjadi, and Heidi Alvarez. 
Formative assessment of the effectiveness of collaboration in GCB. In Proceedings of the International Conference on Information Society (i-Society 2007), pages 103-110, Merrillville, Indiana, USA, October 2007. [ bib | .pdf ] With the rapid emergence of new communication software and hardware tools and the improvement of telecommunication infrastructures, a new collaboration paradigm is on the horizon that allows researchers around the globe to expand their loop of collaborators to cross geographical and cultural boundaries. However, much needs to be learned from the user experiences not only to improve the quality of the collaboration facilities, but also to develop new social protocols for distributed human interactions. In this paper, we analyze the usage of cyberinfrastructure in remote collaboration among researchers. For that, we draw on survey data and interviews with members from different collaborative projects, and we analyze how our current communication tools meet the needs of collaborative research activities. Then, we articulate a series of key challenges and requirements that contemporary teams are facing. In the end, we present ideas on what sorts of collaborative tools need to be built to support distributed and interdisciplinary collaboration projects. Our findings shed light on the factors that drive the use of cyberinfrastructure and its effectiveness in the success of cross-national and interdisciplinary research collaboration and distance learning, and suggest further research topics. Keywords: e-Science, formative assessment, group collaboration, distributed collaboration, distance learning. [32] Onyeka Ezenwoye, S. Masoud Sadjadi, Ariel Carey, and Michael Robinson. Grid service composition in BPEL for scientific applications. In Proceedings of the International Conference on Grid computing, high-performAnce and Distributed Applications (GADA'07), pages 1304-1312, Vilamoura, Algarve, Portugal, November 2007. 
[ bib | .pdf ] Grid computing aims to create an accessible virtual supercomputer by integrating distributed computers to form a parallel infrastructure for processing applications. To enable service-oriented Grid computing, the Grid computing architecture was aligned with the current Web service technologies; thereby, making it possible for Grid applications to be exposed as Web services. The WSRF set of specifications standardized the association of state information with Web services (WS-Resource) while providing interfaces for the management of state data. Key to the realization of the benefits of Grid computing is the ability to integrate WS-Resources to create higher-level applications. The Business Process Execution Language (BPEL) is the leading standard for integrating Web services and as such has a natural affinity to the integration of Grid services. In this paper, we share our experience on using BPEL to integrate, create, and manage WS-Resources that implement the factory pattern. We use a Bioinformatics application as a case study to show how BPEL can be used to orchestrate Grid services. The execution environment for our case study comprises the Globus Toolkit as the Grid middleware and the ActiveBPEL as the BPEL engine. To the best of our knowledge, this work is among the handful of approaches that successfully use BPEL for orchestrating WSRF-based services and the only one that includes the discovery and management of instances. Keywords: BPEL, Grid Computing, WSRF, OGSA-DAI, Service Composition. [33] I. Rodero, J. Corbalan, F. Guim, L. L. Fong, Y. G. Liu, and S. Masoud Sadjadi. Looking for an evolution of grid scheduling: Meta-brokering. In Proceedings of the Second CoreGRID Workshop on Middleware at ISC2007 (CoreGRID-2007), pages 105-119, Dresden, Germany, June 2007. [ bib | .pdf ] A Grid Resource Broker, also called a meta-scheduler, is a component used for matching work to available Grid resources. 
Grid resources usually have local resource management systems with particular schedulers and belong to different IT centers or institutions. These centers or institutions may have different policies or requirements on how the resources should be used. This situation causes two main problems: users' uniform access to the Grid is lost, and scheduling decisions are made separately when they should be coordinated. These problems have been observed in different efforts such as the HPC-Europa project, but it remains an open problem. In this paper, we discuss how to achieve a new approach to global brokering with new scheduling techniques through meta-brokering. As a result of the discussion on requirements for meta-brokering, we propose a design in two different contexts: as an extension of HPC-Europa on top of different meta-schedulers, and as a distributed model for the LA Grid meta-brokering project. [34] S. Masoud Sadjadi, J. Martinez, T. Soldo, L. Atencio, R. M. Badia, and J. Ejarque. Improving separation of concerns in the development of scientific applications. In Proceedings of The Nineteenth International Conference on Software Engineering and Knowledge Engineering (SEKE'2007), pages 456-461, Boston, USA, July 2007. [ bib | .pdf ] High performance computing (HPC) is gaining popularity in solving scientific applications. Using the current programming standards, however, it takes an HPC expert to efficiently take advantage of HPC facilities, a skill that a scientist does not necessarily have. This lack of separation of concerns has resulted in scientific applications with rigid code, which entangles non-functional concerns (i.e., the parallel code) into functional concerns (i.e., the core business logic). Effectively, this tangled code hinders the maintenance and evolution of these applications. 
In this paper, we introduce the Transparent Grid Enabler (TGE), which separates the task of developing the business logic of a scientific application from the task of improving its performance. TGE achieves this goal by integrating two existing software tools, namely, TRAP/J and GRID superscalar. A simple matrix multiplication program is used as a case study to demonstrate the current use and capabilities of TGE. [35] S. Masoud Sadjadi and Fernando Trigoso. TRAP.NET: A realization of transparent shaping in .NET. In Proceedings of The Nineteenth International Conference on Software Engineering and Knowledge Engineering (SEKE'2007), pages 19-24, Boston, USA, July 2007. [ bib | .pdf ] We define adaptability as the capacity of software to adjust its behavior in response to changing conditions. To list just a few examples, adaptability is important in pervasive computing, where software in mobile devices needs to adapt to dynamic changes in wireless networks; autonomic computing, where software in critical systems is required to be self-manageable; and grid computing, where software for long-running scientific applications needs to be resilient to hardware crashes and network outages. In this paper, we provide a realization of the transparent shaping programming model, called TRAP.NET, which enables transparent adaptation in existing .NET applications in response to changes in the application requirements and/or in their execution environment. Using TRAP.NET, we can adapt an application dynamically, at run time, or statically, at load time, without the need to manually modify the application's original functionality; hence, transparent. [36] Heidi L. Alvarez, David Chatfield, Donald A. Cox, Eric Crumpler, Cassian D’Cunha, Ronald Gutierrez, Julio Ibarra, Eric Johnson, Kuldeep Kumar, Tom Milledge, Giri Narasimhan, Rajamani S. Narayanan, Alejandro de la Puente, S. Masoud Sadjadi, and Chi Zhang. 
Cyberbridges: A model collaboration infrastructure for e-Science. In Proceedings of the 7th IEEE International Symposium on Cluster Computing and the Grid (CCGrid'07), pages 65-72, Rio de Janeiro, Brazil, May 2007. (acceptance rate 33.5%). [ bib | .pdf ] The 'CyberBridges' pilot project is an innovative model for creating a new generation of scientists and engineers who are capable of fully integrating cyberinfrastructure into the whole educational, professional, and creative process of their respective disciplines. CyberBridges augments graduate student education to include a foundation of understanding in Advanced Networking and Grid Infrastructure for High Performance Computing, and bridges the divide between the information technology community and diverse science and engineering disciplines. CyberBridges is increasing the rate of discovery for science and engineering faculty by empowering them with cyberinfrastructure, fostering inter-disciplinary research collaboration, improving minority graduate education, and institutionalizing this change process. We demonstrate the effectiveness of CyberBridges by providing four case studies with graduate students of Physics, Bioinformatics, Chemistry, and Biomedical Engineering. Groundwork has begun to extend the outreach of CyberBridges for international research and education collaborations. [37] Raju Rangaswami, S. Masoud Sadjadi, Nagarajan Prabakar, and Yi Deng. Automatic generation of user-centric multimedia communication services. In Proceedings of the 26th IEEE International Performance Computing and Communications Conference (IPCCC), pages 324-331, New Orleans, Louisiana, USA, April 2007. [ bib | .pdf ] Multimedia communication services today are conceived, designed, and developed in isolation, following a stovepipe approach. This has resulted in a fragmented and incompatible set of technologies and products. 
Building new communication services requires a lengthy and costly development cycle, which severely limits the pace of innovation. In this paper, we address the fundamental problem of automating the development of multimedia communication services. We propose a new paradigm for creating such services through declarative specification and generation, rather than through traditional design and development. Further, the proposed paradigm pays special attention to how the end-user specifies his/her communication needs, an important requirement largely ignored in existing approaches. We argue that for the domain of user-centric multimedia communication services, the proposed approach of automatic generation is not only feasible in terms of the ability to meet a range of communication needs in several domains, but is also desirable for maintaining and improving the pace of innovation in multimedia communication services. [38] Onyeka Ezenwoye and S. Masoud Sadjadi. TRAP/BPEL: A framework for dynamic adaptation of composite services. In Proceedings of the International Conference on Web Information Systems and Technologies (WEBIST 2007), Barcelona, Spain, March 2007. (17 pages). [ bib | .pdf ] TRAP/BPEL is a framework that adds autonomic behavior to existing BPEL processes automatically and transparently. We define an autonomic BPEL process as a composite Web service that is capable of responding to changes in its execution environment (e.g., a failure in a partner Web service). Unlike other approaches, TRAP/BPEL does not require any manual modifications to the original code of the BPEL processes, and there is no need to extend either the BPEL language or the BPEL engine. In this paper, we describe the details of the TRAP/BPEL framework and use a case study to demonstrate the feasibility and effectiveness of our approach. Keywords: TRAP/BPEL, generic proxy, self-management, dynamic service discovery. [39] Onyeka Ezenwoye and S. Masoud Sadjadi. 
RobustBPEL2: Transparent autonomization in business processes through dynamic proxies. In Proceedings of the 8th IEEE International Symposium on Autonomous Decentralized Systems (ISADS 2007), pages 17-24, Sedona, Arizona, March 2007. [ bib | .pdf ] The Web services paradigm allows applications to interact with one another over the Internet. BPEL facilitates this interaction by providing a platform through which Web services can be integrated. However, the autonomous and distributed nature of the integrated services presents unique challenges to the reliability of composed services. The focus of our ongoing research is to transparently introduce autonomic behavior to BPEL processes in order to make them more resilient to the failure of partner services. In this work, we present an approach where BPEL processes are adapted by redirecting their interactions with partner services to a dynamic proxy. We describe the generative adaptation process and the architecture of the adaptive BPEL processes and their corresponding proxies. Finally, we use case studies to demonstrate how generated dynamic proxies are used to support self-healing and self-optimization in instrumented BPEL processes. Keywords: RobustBPEL2, dynamic proxy, self-management, dynamic service discovery. [40] Chi Zhang, S. Masoud Sadjadi, Weixiang Sun, Raju Rangaswami, and Yi Deng. A user-centric network communication broker for multimedia collaborative computing. In Proceedings of the Second IEEE International Conference on Collaborative Computing (CollaborateCom 2006), pages 1-5, Atlanta, Georgia, USA, November 2006. [ bib | .pdf ] The development of collaborative multimedia applications today follows a vertical development approach, which is a major inhibitor that drives up the cost of development and slows down the pace of innovation of new generations of collaborative applications. 
In this paper, we propose a network communication broker (NCB) that provides a unified higher-level abstraction that encapsulates the complexity of network-level communication control and media delivery for the class of multimedia collaborative applications. NCB expedites the development of next-generation applications with various communication logics. Furthermore, NCB-based applications can be easily ported to new network environments. In addition, the self-managing design of NCB supports dynamic adaptation in response to changes in network conditions and user requirements. Keywords: Network communication broker, multimedia, middleware. [41] Yi Deng, S. Masoud Sadjadi, Peter J. Clarke, Chi Zhang, Vagelis Hristidis, Raju Rangaswami, and Nagarajan Prabakar. A communication virtual machine. In Proceedings of the 30th Annual International Computer Software and Applications Conference (COMPSAC 2006), pages 521-531, Chicago, U.S.A., September 2006. [ bib | .pdf ] The convergence of data, voice and multimedia communication over digital networks, coupled with continuous improvement in network capacity and reliability, has significantly enriched the ways we communicate. However, the stovepipe approach used to develop today’s communication applications and tools results in rigid technology, limited utility, a lengthy and costly development cycle, difficulty in integration, and hindered innovation. In this paper, we present a fundamentally different approach, which we call the Communication Virtual Machine (CVM), to address these problems. CVM provides a user-centric, model-driven approach for conceiving, synthesizing and delivering communication solutions across application domains. We argue that CVM represents a far more effective paradigm for engineering communication solutions. The concept, architecture, modeling language, prototypical design and implementation of CVM are discussed. Keywords: Model driven, communication application, multimedia, middleware, telemedicine. 
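The "unified higher-level abstraction" that [40] and [41] argue for can be pictured as a small broker API that applications call instead of managing network sessions and media delivery themselves. The sketch below is purely illustrative; every name in it (Broker, Session, and so on) is hypothetical and not taken from the NCB or CVM papers:

```python
# Illustrative sketch only: a broker-style API in the spirit of NCB/CVM,
# where one high-level object hides session setup and media delivery
# from the application. All names here are hypothetical.

class Session:
    """A logical communication session among a set of participants."""
    def __init__(self, participants, media):
        self.participants = set(participants)
        self.media = media          # e.g. {"audio", "video"}
        self.log = []               # delivered messages, for illustration

    def add_participant(self, user):
        self.participants.add(user)

    def send(self, sender, payload):
        # A real broker would negotiate transports, codecs, and NAT
        # traversal here; we merely record delivery to each other party.
        for user in self.participants - {sender}:
            self.log.append((sender, user, payload))

class Broker:
    """Single entry point the application talks to (the NCB idea)."""
    def __init__(self):
        self.sessions = {}

    def create_session(self, name, participants, media=("audio",)):
        self.sessions[name] = Session(participants, set(media))
        return self.sessions[name]

broker = Broker()
s = broker.create_session("demo", ["alice", "bob"], media=("audio", "video"))
s.add_participant("carol")
s.send("alice", "hello")   # delivered to bob and to carol
```

The point of the design, as these papers argue, is that the application states *who* communicates and *what* media are involved, while the broker layer absorbs the network-level mechanics.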
[42] Farshad A. Samimi, Philip K. McKinley, and S. Masoud Sadjadi. Mobile Service Clouds: A self-managing infrastructure for autonomic mobile computing services. In Proceedings of the Second International Workshop on Self-Managed Networks, Systems & Services (SelfMan 2006, LNCS 3996), volume 3996 of Lecture Notes in Computer Science (LNCS), pages 130-141, Dublin, Ireland, June 2006. Springer-Verlag. [ bib | .pdf ] We recently introduced Service Clouds, a distributed infrastructure designed to facilitate rapid prototyping and deployment of autonomic communication services. In this paper, we propose a model that extends Service Clouds to the wireless edge of the Internet. This model, called Mobile Service Clouds, enables dynamic instantiation, composition, configuration, and reconfiguration of services on an overlay network to support self-management in mobile computing. We have implemented a prototype of this model and applied it to the problem of dynamically instantiating and migrating proxy services for mobile hosts. We conducted a case study involving data streaming across a combination of PlanetLab nodes, local proxies, and wireless hosts. Results are presented demonstrating the effectiveness of the prototype in establishing new proxies and migrating their functionality in response to node failures. Keywords: autonomic networking, distributed service composition, self-managing system, overlay network, mobile computing, quality of service. [43] Onyeka Ezenwoye and S. Masoud Sadjadi. Enabling robustness in existing BPEL processes. In Proceedings of the 8th International Conference on Enterprise Information Systems, Paphos, Cyprus, May 2006. (8 pages). [ bib | .pdf ] Web services are increasingly being used to expose applications over the Internet. To promote efficiency and the reuse of software, these Web services are being integrated both within enterprises and across enterprises, creating higher function services. 
BPEL is a workflow language that can be used to facilitate this integration. Unfortunately, the autonomous nature of Web services leaves BPEL processes susceptible to the failures of their constituent services. In this paper, we present a systematic approach to making existing BPEL processes more fault tolerant by monitoring the involved Web services at runtime, and by replacing delinquent Web services. To show the feasibility of our approach, we developed a prototype implementation that generates more robust BPEL processes from existing ones automatically. The use of the prototype is demonstrated using an existing Loan Approval BPEL process. Keywords: ECommerce, Web Service Monitoring, Robust BPEL Processes. [44] Onyeka Ezenwoye and S. Masoud Sadjadi. Composing aggregate web services in BPEL. In Proceedings of the 44th ACM Southeast Conference (ACMSE 2006), pages 458-463, Melbourne, Florida, March 2006. [ bib | .pdf ] Web services are increasingly being used to expose applications over the Internet. These Web services are being integrated within and across enterprises to create higher function services. BPEL is a workflow language that facilitates this integration. Although both academia and industry acknowledge the need for workflow languages, there are few technical papers focused on BPEL. In this paper, we provide an overview of BPEL and discuss its promises, limitations and challenges. Keywords: Web services, workflow language, BPEL, business processes, A2A integration, and B2B integration. [45] S. Masoud Sadjadi and P. K. McKinley. Using transparent shaping and web services to support self-management of composite systems. In Proceedings of the International Conference on Autonomic Computing (ICAC'05), pages 76-87, Seattle, Washington, June 2005. [ bib | .pdf ] Increasingly, software systems are constructed by composing multiple existing applications. The resulting complexity increases the need for self-management of the system. 
However, adding autonomic behavior to composite systems is difficult, especially when the existing components were not originally designed to support such interactions. Moreover, entangling the code for integrated self-management with the code for the business logic of the original applications may actually increase the complexity of the system, counter to the desired goal. In this paper, we propose a technique to enable self-managing behavior to be added to composite systems transparently, that is, without requiring manual modifications to the existing code. The technique uses transparent shaping, developed previously to enable dynamic adaptation in existing programs, to weave self-managing behavior into existing applications, which interact through Web services. A case study demonstrates the use of this technique to construct a fault-tolerant surveillance application from two existing applications, one developed in .NET and the other in CORBA, without the need to modify the source code of the original applications. Keywords: application integration, adaptive middleware, autonomic computing, self-configuration, fault-tolerance, dynamic adaptation, transparent adaptation. [46] S. Masoud Sadjadi, Philip K. McKinley, and Betty H.C. Cheng. Transparent shaping of existing software to support pervasive and autonomic computing. In Proceedings of the first Workshop on the Design and Evolution of Autonomic Application Software 2005 (DEAS'05), in conjunction with ICSE 2005, pages 1-7, St. Louis, Missouri, May 2005. [ bib | .pdf ] The need for adaptability in software is growing, driven in part by the emergence of pervasive and autonomic computing. In many cases, it is desirable to enhance existing programs with adaptive behavior, enabling them to execute effectively in dynamic environments. In this paper, we propose a general programming model called transparent shaping to enable dynamic adaptation in existing programs. 
We describe an approach to implementing transparent shaping that combines four key software development techniques: aspect-oriented programming to realize separation of concerns at development time, behavioral reflection to support software reconfiguration at run time, component-based design to facilitate independent development and deployment of adaptive code, and adaptive middleware to encapsulate the adaptive functionality. After presenting the general model, we discuss two specific realizations of transparent shaping that we have developed and used to create adaptable applications from existing programs. [47] Shakil Siddique, Raimund K. Ege, and S. Masoud Sadjadi. X-Communicator: Implementing an advanced adaptive SIP-based user agent for multimedia communication. In Proceedings of the SouthEastCon 2005, pages 271 - 276, 2005. [ bib | .pdf ] [48] Farshad A. Samimi, Philip K. McKinley, S. Masoud Sadjadi, and Peng Ge. Kernel-middleware interaction to support adaptation in pervasive computing environments. In Proceedings of the Second International Workshop on Middleware for Pervasive and Ad-Hoc Computing, a Companion Proceedings of the fifth International Middleware Conference (Middleware'04), pages 140-145, Toronto, Ontario, Canada, October 2004. [ bib | .pdf ] In pervasive computing environments, conditions are highly variable and resources are limited. In order to meet the needs of applications, systems must adapt dynamically to changing situations. Since adaptation at one system layer may be insufficient, cross-layer, or vertical, approaches to adaptation may be needed. Moreover, adaptation in distributed systems often requires horizontal cooperation among hosts. This cooperation is not restricted to the source and destination(s) of a data stream, but might also include intermediate hosts in an overlay network or mobile ad hoc network. We refer to this combined capability as universal adaptation. 
We contend that the model defining interaction between adaptive middleware and the operating system is critical to realizing universal adaptation. We explore this hypothesis by evaluating the Kernel-Middleware eXchange (KMX), a specific model for cross-layer, cross-system adaptation. We present the KMX architecture and discuss its potential role in supporting universal adaptation in pervasive computing environments. We then describe a prototype implementation of KMX and show results of an experimental case study in which KMX is used to improve the quality of video streaming to mobile nodes in a hybrid wired-wireless network. Keywords: adaptive middleware, pervasive computing, cross-layer adaptation, universal adaptation, multimedia communication, quality of service, video streaming, wireless network. [49] S. Masoud Sadjadi, Philip K. McKinley, Betty H.C. Cheng, and R.E. Kurt Stirewalt. TRAP/J: Transparent generation of adaptable Java programs. In Proceedings of the International Symposium on Distributed Objects and Applications (DOA'04), volume 3291, pages 1243-1261, Agia Napa, Cyprus, October 2004. [ bib | .pdf ] This paper describes TRAP/J, a software tool that enables new adaptable behavior to be added to existing Java applications transparently (that is, without modifying the application source code and without extending the JVM). The generation process combines behavioral reflection and aspect-oriented programming to achieve this goal. Specifically, TRAP/J enables the developer to select, at compile time, a subset of classes in the existing program that are to be adaptable at run time. TRAP/J then generates specific aspects and reflective classes associated with the selected classes, producing an adapt-ready program. As the program executes, new behavior can be introduced via interfaces to the adaptable classes.
A case study is presented in which TRAP/J is used to introduce adaptive behavior to an existing audio-streaming application, enabling it to operate effectively in a lossy wireless network by detecting and responding to changing network conditions. Keywords: generator framework, transparent adaptation, dynamic reconfiguration, aspect-oriented programming, behavioral reflection, middleware, mobile computing, quality-of-service. [50] Z. Zhou, P. K. McKinley, and S. M. Sadjadi. On quality-of-service and energy consumption tradeoffs in FEC-enabled audio streaming. In Proceedings of the 12th IEEE International Workshop on Quality of Service (IWQoS 2004), pages 161-170, Montreal, Canada, June 2004. Winner of the IWQoS 2004 best student paper award. (acceptance rate 16.23% or 25/154). [ bib | .pdf ] This paper addresses the energy consumption of forward error correction (FEC) protocols as used to improve quality-of-service (QoS) for wireless computing devices. The paper also characterizes the effect on energy consumption and QoS of the power saving mode in 802.11 wireless local area networks (WLANs). Experiments are described in which FEC-encoded audio streams are multicast to mobile computers across a WLAN. Results of these experiments quantify the tradeoffs between improved QoS, due to FEC, and additional energy consumption caused by receiving and decoding redundant packets. Two different approaches to FEC are compared relative to these metrics. The results of this study enable the development of adaptive software mechanisms that attempt to manage these tradeoffs in the presence of highly dynamic wireless environments. Keywords: energy consumption, quality-of-service, forward error correction, mobile computing, handheld computer, adaptive middleware. [51] S. M. Sadjadi and P. K. McKinley. Transparent self-optimization in existing CORBA applications. In Proceedings of the International Conference on Autonomic Computing (ICAC-04), pages 88-95, New York, NY, May 2004.
[ bib | .pdf ] This paper addresses the design of adaptive middleware to support autonomic computing in pervasive computing environments. The particular problem we address here is how to support self-optimization to changing network connection capabilities as a mobile user interacts with heterogeneous elements in a wireless network infrastructure. The goal is to enable self-optimization to such changes transparently with respect to the core application code. We propose a solution based on the use of the generic proxy, which is a specific CORBA object that can intercept and process any CORBA request using rules and actions that can be introduced to the knowledge base of the proxy during execution. To explore its design and operation, we have incorporated the generic proxy into ACT [1], an adaptive middleware framework we designed previously to support adaptation in CORBA applications. Details of the generic proxy are presented, followed by results of a case study enabling self-optimization for an existing surveillance application in a heterogeneous wireless environment. Keywords: adaptive middleware, autonomic computing, self-optimization, dynamic adaptation, transparent adaptation, generic proxy, quality-of-service, mobile computing, CORBA. [52] S. M. Sadjadi, P. K. McKinley, R. E. K. Stirewalt, and B. H.C. Cheng. Generation of self-optimizing wireless network applications. In Proceedings of the International Conference on Autonomic Computing (ICAC-04), pages 310-311, New York, NY, May 2004. [ bib | .pdf ] This paper introduces TRAP/J, a software tool that enables autonomic computing in existing Java programs by generating adapt-ready versions of the original programs at compile time. The generation process is transparent to the original program source code; that is, there is no need to modify the source code manually. At run time, new behavior can be introduced to the adapt-ready programs.
To reduce overhead, TRAP/J enables the developer to select, at compile time, a subset of the classes constituting an existing program to be adaptive at run time. To support dynamic adaptation in existing Java programs, TRAP/J benefits from aspect-oriented programming and behavioral reflection. TRAP/J generates specific aspects and reflective classes associated with the selected classes. A case study is presented in which TRAP/J was used to enable an existing audio-streaming application to perform self-optimization in a wireless network environment by adapting to changing conditions automatically. Keywords: autonomic computing, adapt-ready programs, transparent adaptation, aspect-oriented programming, behavioral reflection, middleware, quality-of-service, mobile computing. [53] S. M. Sadjadi and P. K. McKinley. ACT: An adaptive CORBA template to support unanticipated adaptation. In Proceedings of the 24th IEEE International Conference on Distributed Computing Systems (ICDCS'04), pages 74-83, Tokyo, Japan, March 2004. (acceptance rate 17.7%). [ bib | .pdf ] This paper proposes an Adaptive CORBA Template (ACT), which enables run-time improvements to CORBA applications in response to unanticipated changes in either their functional requirements or their execution environments. ACT enhances CORBA applications by weaving adaptive code into the applications' object request brokers (ORBs) at run time. The woven code intercepts and adapts the requests, replies, and exceptions that pass through the ORBs. ACT itself is language- and ORB-independent. Specifically, ACT can be used to develop an object-oriented framework in any language that supports dynamic loading of code and can be applied to any CORBA ORB that supports portable interceptors. Moreover, ACT can be integrated with other adaptive CORBA frameworks and can be used to support interoperation among otherwise incompatible frameworks.
To evaluate the performance and functionality of ACT, we implemented a prototype in Java to support unanticipated adaptation in non-functional concerns, such as quality-of-service and system-resource management. Our experimental results show that the overhead introduced by the ACT infrastructure is negligible, while the adaptations offered are highly flexible. Keywords: middleware, CORBA, dynamic adaptation, interoperability, request interceptor, dynamic weaving, proxy, quality-of-service, mobile computing. [54] S. M. Sadjadi, P. K. McKinley, and E. P. Kasten. Architecture and operation of an adaptable communication substrate. In Proceedings of the Ninth IEEE International Workshop on Future Trends of Distributed Computing Systems (FTDCS'03), pages 46-55, San Juan, Puerto Rico, May 2003. [ bib | .pdf ] This paper describes the internal architecture and operation of an adaptable communication component called the MetaSocket. MetaSockets are created using Adaptive Java, a reflective extension to Java that enables a component's internal architecture and behavior to be adapted at run time in response to external stimuli. This paper describes how adaptive behavior is implemented in MetaSockets, as well as how MetaSockets interact with other adaptive components, such as decision makers and event mediators. Results of experiments on a mobile computing testbed demonstrate how MetaSockets respond to dynamic wireless channel conditions in order to improve the quality of interactive audio streams delivered to iPAQ handheld computers. [55] Philip K. McKinley, S. M. Sadjadi, E. P. Kasten, and R. Kalaskar. Programming language support for adaptive wearable computing. In Proceedings of International Symposium on Wearable Computers (ISWC'02), pages 205-212, Seattle, Washington, October 2002. [ bib | .pdf ] This paper investigates the use of programming language constructs to realize adaptive behavior in support of collaboration among users of wearable and handheld computers.
A prototype language, Adaptive Java, contains primitives that permit programs to modify their own operation in a principled manner. In a case study, Adaptive Java was used to construct MetaSocket components, whose composition and behavior can be adapted to changing conditions during execution. MetaSockets were then integrated into Pavilion, a web-based collaboration framework, and experiments were conducted on a mobile computing testbed containing wearable, handheld, and laptop computer systems. Performance results demonstrate the utility of MetaSockets in improving the quality of interactive audio streams and reliable data transfers among collaborating users. Keywords: adaptive middleware, reflection, wearable computing, mobile computing, wireless networks, forward error correction. [56] P. K. McKinley, S. M. Sadjadi, and E. P. Kasten. An adaptive software approach to intrusion detection and response. In Proceedings of The 10th International Conference on Telecommunication Systems, Modeling and Analysis (ICTSM10), pages 91-99, Monterey, California, October 2002. [ bib | .pdf ] This paper proposes the use of programming language constructs to support adaptive self-monitoring and self-reporting software. The methods are particularly well-suited to wireless mobile devices, where limited resources may constrain the use of certain software audits. An adaptive software architecture is described that supports run-time transformations on software components, enabling them to report internal details on how they are being used to other parts of the system. Effectively, any component of the system can be turned into an “informer” at run time, and the nature of the reported information can be adapted dynamically based on changing conditions or directives from another authority, such as an intrusion detection system. A prototype implementation is described.
The operation of the system is demonstrated through an experiment in which it detects and responds to a malicious host that multicasts “noise” packets to a wireless iPAQ handheld computer. [57] P. K. McKinley, E. P. Kasten, S. M. Sadjadi, and Z. Zhou. Realizing multi-dimensional software adaptation. In Proceedings of the ACM Workshop on Self-Healing, Adaptive and self-MANaged Systems (SHAMAN), held in conjunction with the 16th Annual ACM International Conference on Supercomputing, New York City, NY, June 2002. (8 pages). [ bib | .pdf ] This paper describes the use of programming language constructs to support run-time software adaptation. A prototype language, Adaptive Java, contains primitives that permit programs to modify their own operation in a principled manner. In case studies, Adaptive Java is being used to support adaptation for different crosscutting concerns associated with heterogeneous mobile computing and critical infrastructure protection. Examples are described in which Adaptive Java components support dynamic quality-of-service on wireless networks, run-time energy management for handheld computers, and self-auditing of potential security threats in distributed environments. [58] Z. Yang, B. H.C. Cheng, R. E. K. Stirewalt, J. Sowell, S. M. Sadjadi, and P. K. McKinley. An aspect-oriented approach to dynamic adaptation. In Proceedings of the ACM SIGSOFT Workshop On Self-healing Software (WOSS'02), pages 85-92, November 2002. [ bib | .pdf ] This paper presents an aspect-oriented approach to dynamic adaptation. A systematic process for defining where, when, and how an adaptation is to be incorporated into an application is presented. Specifically, the paper presents a two-phase approach to dynamic adaptation, where the first phase prepares a non-adaptive program for adaptation, and the second phase implements the adaptation at run time. This approach is illustrated with a distributed conferencing application. [59] E. P. Kasten, P. K. McKinley, S. M.
Sadjadi, and R. E. K. Stirewalt. Separating introspection and intercession in metamorphic distributed systems. In Proceedings of the IEEE Workshop on Aspect-Oriented Programming for Distributed Computing (with ICDCS'02), pages 465-472, Vienna, Austria, July 2002. [ bib | .pdf ] Many middleware platforms use computational reflection to support adaptive functionality. Most approaches intertwine the activity of observing behavior (introspection) with the activity of changing behavior (intercession). This paper explores the use of language constructs to separate these parts of reflective functionality. This separation and “packaging” of reflective primitives is intended to facilitate the design of correct and consistent adaptive middleware. A prototype implementation is described in which this functionality is realized through extensions to the Java programming language. A case study is described in which “metamorphic” socket components are created from regular socket classes and used to realize adaptive behavior on wireless network connections. Keywords: adaptive middleware, reflection, component design, mobile computing, wireless networks, forward error correction.
# Technical Reports

 [1] Ivan Rodero, Francec Guima, Julita Corbalan, Liana Fong, and S. Masoud Sadjadi. Evaluation of broker selection strategies. Technical Report UPC-DAC-RR-CAP-2008-41, Computer Architecture Department, Technical University of Catalonia, Barcelona, Spain, Dec. 2008. [ bib ] The increasing demand for resources of high performance computing systems has led to new forms of collaboration among distributed systems, such as interoperable grid systems that contain and manage their own resources. While within a single domain one of the most important tasks is the selection of the most appropriate set of resources to dispatch a job, in an interoperable grid environment this problem shifts to selecting the most appropriate domain containing the required resources for the job. In the Latin American Grid initiative, our model consists of multiple domains. Each domain has its domain broker, and the task of scheduling on top of brokers can be called metabrokering or broker selection. In this paper, we present and evaluate the “bestBrokerRank” broker selection policy and its two variants. The first one uses the resource information in aggregated form as input, and the second one also uses the brokers' average bounded slowdown as a dynamic performance metric. From our evaluations performed with simulation tools, we state that the proposed resource aggregation algorithms are scalable for an interoperable grid environment and we show that the best performance results are obtained with our coordinated policy. We conclude that delegating part of the scheduling responsibilities to the underlying scheduling layers is a good way to balance the performance among the different brokers and schedulers. Keywords: Grid Computing, Scheduling Strategies, Interoperability. [2] Ivan Rodero, Francec Guima, Julita Corbalan, Liana Fong, and S. Masoud Sadjadi. Interoperable grid scheduling strategies.
Technical Report FIU-SCIS-2008-12-02, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, Dec. 2008. [ bib ] The increasing demand for resources of high performance computing systems has led to new forms of collaboration among distributed systems, such as interoperable grid systems that contain and manage their own resources. While within a single domain one of the most important tasks is the selection of the most appropriate set of resources to dispatch a job, in an interoperable grid environment this problem shifts to selecting the most appropriate domain containing the required resources for the job. In the Latin American Grid initiative, our model consists of multiple domains. Each domain has its domain broker, and the task of scheduling on top of brokers can be called metabrokering or broker selection. In this paper, we present and evaluate the “bestBrokerRank” broker selection policy and its two variants. The first one uses the resource information in aggregated form as input, and the second one also uses the brokers' average bounded slowdown as a dynamic performance metric. From our evaluations performed with simulation tools, we state that the proposed resource aggregation algorithms are scalable for an interoperable grid environment and we show that the best performance results are obtained with our coordinated policy. We conclude that delegating part of the scheduling responsibilities to the underlying scheduling layers is a good way to balance the performance among the different brokers and schedulers. Keywords: Grid Computing, Scheduling Strategies, Interoperability. [3] Javier Ocasio Pérez, Pedro I. Rivera-Vega, S. Masoud Sadjadi, and Fernando Trigoso. Dynamic adaptation of a math service using TRAP.NET. Technical Report FIU-SCIS-2008-07-01, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, July 2008.
[ bib | .pdf ] This technical report details a case study done for the TRAP.NET project (which stands for Transparent Reflective Aspect Programming in Microsoft’s .NET Framework). TRAP.NET provides dynamic adaptation for software programs written in .NET. Keywords: TRAP.NET, Dynamic Adaptation, and Math Service. [4] S. Masoud Sadjadi, David Villegas, Javier Munoz, Diego Lopez, Alex Orta, Michael McFail, Xabriel J. Collazo-Mojica, and Javier Figueroa. Finding an appropriate profiler for the weather research and forecasting code. Technical Report FIU-SCIS-2007-09-03, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, August 2007. [ bib | .pdf ] This evaluation of profiling tools started because of a need to examine the behavior of the Weather Research and Forecasting Model (WRF). It is currently written to run on a single cluster; our team wished to explore the options for scaling out WRF to a grid environment. This necessitated understanding how WRF works and what its resource usage patterns look like. In order to do this we required a profiler, which led us to create this document. We began with a long list of tools, but an in-depth investigation of each of them would have been both ineffective and unwarranted. Instead, we broke our evaluation of the field into three passes. In the first pass we discarded programs that did not meet our basic criteria, such as architecture and language support. The second pass was more qualitative: we came up with a list of pros and cons for each tool, and rejected those that did not have features we wanted. The tools still under consideration were then given an extensive trial to determine how well they worked for us. We examined things like documentation, ease of installation, whether the tool provided source code correlation and call graphs, etc.
It is our hope that this information proves useful to the community, allowing researchers and professionals to learn from our experiences. Keywords: Profiler Evaluation, Weather Research and Forecasting, WRF, Fortran, MPI, OpenMP, Grid Computing. [5] S. Masoud Sadjadi, Javier Munoz, Diego Lopez, David Villegas, Javier Figueroa, Xabriel J. Collazo-Mojica, Michael McFail, and Alex Orta. Weather research and forecasting model 2.2 documentation: A step-by-step guide of a model run. Technical Report FIU-SCIS-2007-09-02, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, August 2007. [ bib | .pdf ] In Summer 2007, we, a group of four undergraduate students and three graduate students under the supervision of Dr. Masoud Sadjadi at the School of Computing and Information Sciences (SCIS) of Florida International University (FIU), started an effort to gridify the Weather Research and Forecasting (WRF) code. During this process, it became apparent to us that we needed a better understanding of the code's functionality before we started the gridification process. As the available documentation on WRF was not targeted at developers like us, who would need to modify the code's operation to adapt it to a grid computing environment (and not just add a new physics model, for example), we had to search through the lines of WRF’s FORTRAN and C code to discover how the code actually functions, especially in parts such as domain decomposition and network interactions among the nodes. Due to the large and complex nature of the WRF code, documentation of the program flow proved necessary. With more time and thought, we decided to start a documentation effort that would be useful not only for us, but also for others interested in learning about WRF operation in more depth. This guide should help developers understand basic concepts of WRF, how it executes, and how some of its functions branch into different dimensions.
We hope that by the time our audience finishes reading this document they will have gained a strong understanding of how WRF operates. Keywords: Weather Research and Forecasting, WRF, Fortran, MPI, OpenMP, Grid Computing. [6] S. Masoud Sadjadi, Luis Atencio, and Tatiana Soldo. TRAP/J v2.1: An improvement for transparent adaptation. Technical Report FIU-SCIS-2007-09-01, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, May 2007. [ bib | .pdf ] With the advent of mobile, pervasive, and grid computing, software systems must be designed to dynamically adapt to changes that might occur in their runtime environments. Certainly, careful system design and modeling are key factors for systems to be complete. However, as technology changes and new forms of technology continue to emerge, predetermining all possible scenarios in which a system may be running is nothing short of impossible. These issues can be addressed with a tool called TRAP/J (Transparent Reflective Aspect Programming in Java). However, the first implementation of this tool performed poorly on demanding applications, severely lacked usability, and provided very limited support for adaptation. In this paper, we address various issues in the first implementation of TRAP/J and present a new version, TRAP/J v2.1, aimed at providing better performance and usability than the original TRAP/J. TRAP/J v2.1 focuses on improving the performance of the generation and adaptation phases of transparent adaptation while keeping ease of use in mind. This allows a decision support system (in our case, a user) to benefit from a user-friendly, interactive, web-based Composer Interface through which new behavior can be inserted into an application remotely at runtime or startup time. In addition, it has a Generator Interface that allows users to choose which classes they wish to make adaptable.
Keywords: TRAP/J, Dynamic Adaptation, Java, Pervasive Computing, Grid Computing. [7] Onyeka Ezenwoye, S. Masoud Sadjadi, Ariel Carey, and Michael Robinson. Grid service composition in BPEL for scientific applications. Technical Report FIU-SCIS-2007-08-01, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, August 2007. [ bib | .pdf ] Grid computing aims to create an accessible virtual supercomputer by integrating distributed computers to form a parallel infrastructure for processing applications. To enable service-oriented Grid computing, the Grid computing architecture was aligned with current Web service technologies, thereby making it possible for Grid applications to be exposed as Web services. The WSRF set of specifications standardized the association of state information with Web services (WS-Resource) while providing interfaces for the management of state data. Key to the realization of the benefits of Grid computing is the ability to integrate WS-Resources to create higher-level applications. The Business Process Execution Language (BPEL) is the leading standard for integrating Web services and as such has a natural affinity to the integration of Grid services. In this paper, we share our experience of using BPEL to integrate, create, and manage WS-Resources that implement the factory pattern. We use a bioinformatics application as a case study to show how BPEL can be used to orchestrate Grid services. The execution environment for our case study comprises the Globus Toolkit as the Grid middleware and ActiveBPEL as the BPEL engine. To the best of our knowledge, this work is among the handful of approaches that successfully use BPEL for orchestrating WSRF-based services and the only one that includes the discovery and management of instances. Keywords: BPEL, Grid Computing, WSRF, OGSA-DAI, Service Composition. [8] Onyeka Ezenwoye, S. Masoud Sadjadi, Ariel Carey, and Michael Robinson.
Orchestrating WSRF-based Grid services. Technical Report FIU-SCIS-2007-04-01, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, April 2007. [ bib | .pdf ] Grid computing aims to create an accessible virtual supercomputer by integrating distributed computers to form a parallel infrastructure for processing applications. To enable service-oriented Grid computing, the Grid computing architecture was aligned with current Web service technologies, thereby making it possible for Grid applications to be exposed as Web services. The WSRF set of specifications standardized the association of state information with Web services (WS-Resource) while providing interfaces for the management of state data. Key to the realization of the benefits of Grid computing is the ability to integrate WS-Resources to create higher-level applications. The Business Process Execution Language (BPEL) is the leading standard for integrating Web services and as such has a natural affinity to the integration of Grid services. In this paper, we share our experience of using BPEL to integrate, create, and manage WS-Resources that implement the factory/instance pattern. Keywords: BPEL, Grid Computing, WSRF, OGSA-DAI, Grid Service Composition. [9] S. Masoud Sadjadi, J. Martinez, T. Soldo, L. Atencio, R. M. Badia, and J. Ejarque. Improving separation of concerns in the development of scientific applications. Technical Report FIU-SCIS-2007-02-01, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, February 2007. [ bib | .pdf ] High performance computing (HPC) is gaining popularity in solving scientific applications. Using the current programming standards, however, it takes an HPC expert to efficiently take advantage of HPC facilities, a skill that a scientist does not necessarily have.
This lack of separation of concerns has resulted in scientific applications with rigid code, which entangles non-functional concerns (i.e., the parallel code) with functional concerns (i.e., the core business logic). Effectively, this tangled code hinders the maintenance and evolution of these applications. In this paper, we introduce the Transparent Grid Enabler (TGE), which separates the task of developing the business logic of a scientific application from the task of improving its performance. TGE achieves this goal by integrating two existing software tools, namely TRAP/J and GRID superscalar. A simple matrix multiplication program is used as a case study to demonstrate the current use and capabilities of TGE. Keywords: Transparent Grid Enablement, High Performance Computing, TRAP/J, GRID superscalar. [10] Onyeka Ezenwoye and S. Masoud Sadjadi. TRAP/BPEL: A framework for dynamic adaptation of composite services. Technical Report FIU-SCIS-2006-06-02, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, June 2006. [ bib | .pdf ] TRAP/BPEL is a framework that adds autonomic behavior to existing BPEL processes automatically and transparently. We define an autonomic BPEL process as a composite Web service that is capable of responding to changes in its execution environment (e.g., a failure in a partner Web service). Unlike other approaches, TRAP/BPEL does not require any manual modifications to the original code of the BPEL processes, and there is no need to extend either the BPEL language or the BPEL engine. Furthermore, TRAP/BPEL promotes the reuse of code in BPEL processes as well as in their corresponding autonomic behavior. In this paper, we describe the details of the TRAP/BPEL framework and use a case study to demonstrate the feasibility and effectiveness of our approach. Keywords: TRAP/BPEL, generic proxy, self-management, dynamic service discovery. [11] Onyeka Ezenwoye and S. Masoud Sadjadi.
RobustBPEL-2: Transparent autonomization in aggregate web services using dynamic proxies. Technical Report FIU-SCIS-2006-06-01, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, June 2006. [ bib | .pdf ] The Web services paradigm allows applications to interact electronically with one another over the Internet. BPEL facilitates this interaction by providing a platform with which Web services can be integrated. Using RobustBPEL-1, we demonstrated how an aggregate Web service, defined as a BPEL process, can be instrumented automatically to monitor its partner Web services at runtime and replace failed services via a generated proxy. While in the previous work the proxy is statically bound to a limited number of alternative Web services, in this paper we extend the RobustBPEL-1 toolkit to generate a proxy that dynamically discovers and binds to existing services. Further, we present details of the generation process and the architecture of dynamically adaptable BPEL processes and their corresponding dynamic proxies. Finally, we use two case studies to demonstrate how the generated dynamic proxies are used to support self-healing and self-optimization (specifically, to improve the fault-tolerance and performance) in instrumented BPEL processes. Keywords: Web service monitoring, BPEL processes, dynamic proxies, self-healing, self-optimization, dynamic service discovery. [12] Tao Li, S. Masoud Sadjadi, Juan Carlos Martinez, Lokesh Sasikumar, and Manoj Pillai. Data mining for autonomic system management: A case study at FIU-SCIS. Technical Report FIU-SCIS-2006-03-01, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, March 2006. [ bib | .pdf ] Over the years, advancements in science and technology have led to increased complexity in computing systems.
Systems are thus becoming increasingly complex, with a growing number of heterogeneous software and hardware components, and increasingly difficult to monitor, manage, and maintain. As a result, it is not a trivial task to provide high performance, high dependability, and high manageability for such computing systems. In this paper, we first present an integrated data-driven architecture for computing system management and then present a case study on the Autonomic System Manager: a software system we developed that is currently being used by the system and network administrators of the School of Computing and Information Sciences (SCIS) at Florida International University (FIU). Keywords: Data mining, autonomic computing, self management, self protection, network and system management, anomaly detection. [13] Onyeka Ezenwoye and S. Masoud Sadjadi. Transparent autonomization in aggregate Web services using dynamic proxies. Technical Report FIU-SCIS-2006-02-01, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, February 2006. [ bib | .pdf ] We recently introduced RobustBPEL [13], a software toolkit that provides a systematic approach to making existing aggregate Web services more tolerant to the failure of their constituent Web services. Using RobustBPEL, we demonstrated how an aggregate Web service, defined as a BPEL process, can be instrumented automatically to monitor its partner Web services at runtime and replace failed services via a generated proxy. While in the previous work the proxy is statically bound to a limited number of alternative Web services, in this paper we propose an extension to the RobustBPEL toolkit to generate a proxy that dynamically discovers and binds to existing services.
Further, we present details of the generation process, the architecture of the dynamic proxy, and finally use a case study to demonstrate how the generated dynamic proxy is used to support self-healing and self-optimization (specifically, to improve the fault-tolerance and performance) in an instrumented BPEL process. Keywords: Web service monitoring, BPEL processes, self-healing, self-optimization, dynamic service discovery. [14] Farshad A. Samimi, Philip K. McKinley, and S. Masoud Sadjadi. Mobile service clouds: A self-managing infrastructure for autonomic mobile computing services. Technical Report MSU-CSE-06-7, Department of Computer Science, Michigan State University, East Lansing, Michigan, February 2006. [ bib | .pdf ] We recently introduced Service Clouds, a distributed infrastructure designed to facilitate rapid prototyping and deployment of autonomic communication services. In this paper, we propose a model that extends Service Clouds to the wireless edge of the Internet. This model, called Mobile Service Clouds, enables dynamic instantiation, composition, configuration, and reconfiguration of services on an overlay network to support self-management in mobile computing. We have implemented a prototype of this model and applied it to the problem of dynamically instantiating and migrating proxy services for mobile hosts. We conducted a case study involving data streaming across a combination of PlanetLab nodes, local proxies, and wireless hosts. Results are presented demonstrating the effectiveness of the prototype in establishing new proxies and migrating their functionality in response to node failures. Keywords: autonomic networking, distributed service composition, self-managing system, overlay network, mobile computing, quality of service. [15] Yi Deng, S. Masoud Sadjadi, Peter Clarke, Chi Zhang, Vagelis Hristidis, Raju Rangaswami, and Nagarajan Prabakar. A communication virtual machine.
Technical Report FIU-SCIS-2006-02, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, February 2006. [ bib | .pdf ] The convergence of data, voice, and multimedia communication over digital networks, coupled with continuous improvement in network capacity and reliability, has significantly enriched the ways we communicate. However, the stovepipe approach used to develop today's communication applications and tools results in rigid technology, limited utility, lengthy and costly development cycles, and difficulty in integration, and it hinders innovation. In this paper, we present a fundamentally different approach, which we call the Communication Virtual Machine (CVM), to address these problems. CVM provides a user-centric, model-driven approach for conceiving, synthesizing, and delivering communication solutions across application domains. We argue that CVM represents a far more effective paradigm for engineering communication solutions. The concept, architecture, modeling language, prototypical design, and implementation of CVM are discussed. Keywords: Model driven, communication application, multimedia, middleware, telemedicine. [16] Chi Zhang, S. Masoud Sadjadi, Weixiang Sun, Raju Rangaswami, and Yi Deng. User-centric communication middleware. Technical Report FIU-SCIS-2005-11-01, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, November 2005. [ bib | .pdf ] The development of communication applications today follows a vertical development approach where each application is built on top of low-level network abstractions such as the socket interface. This stovepipe development process is a major inhibitor that drives up the cost of development and slows down the pace of innovation of new generations of communication applications.
In this paper, we propose a user-centric communication middleware (UCM) that provides a unified higher-level abstraction for the class of multimedia communication applications. We investigate the minimum set of necessary requirements for this abstraction from the perspective of next-generation communication applications, and provide an API that exemplifies this abstraction. We demonstrate how UCM encapsulates the complexity of network-level communication control and media delivery. Further, we show how its extensible and self-managing design supports dynamic adaptation in response to changes in network conditions and application requirements with negligible overhead. Finally, we argue that UCM enables rapid development of portable communication applications, which can be easily deployed on IP-based networking infrastructure. Keywords: Multimedia communication applications, user-centric middleware, autonomic computing. [17] Onyeka Ezenwoye and S. Masoud Sadjadi. Composing aggregate Web services in BPEL. Technical Report FIU-SCIS-2005-10-01, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, October 2005. [ bib | .pdf ] Web services are increasingly being used to expose applications over the Internet. These Web services are being integrated within and across enterprises to create higher-function services. BPEL is a workflow language that facilitates this integration. Although both academia and industry acknowledge the need for workflow languages, there are few technical papers focused on BPEL. In this paper, we provide an overview of BPEL and discuss its promises, limitations, and challenges. Keywords: Web services, workflow language, BPEL, business processes, application-to-application integration, and business-to-business integration. [18] Yi Deng, S. Masoud Sadjadi, Peter Clarke, Chi Zhang, Vagelis Hristidis, Raju Rangaswami, and Nagarajan Prabakar.
A unified architectural model for on-demand user-centric communications. Technical Report FIU-SCIS-2005-09, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, September 2005. [ bib | .pdf ] The rapid growth of networking technologies has drastically changed the way we communicate and enabled a wide range of communication applications. However, these applications have been conceived, designed, and developed separately with little or no connection to each other, resulting in a fragmented and incompatible set of technologies and products. Building new communication applications requires a lengthy and costly development cycle, which severely limits the pace of innovation. Current applications are also typically incapable of responding to changes in user communication needs as well as changing network infrastructure and device technology. In this article, we address these issues and present the Unified Communication Model (UCM), a new and user-centric approach for conceiving, generating, and delivering communication applications on-demand. We also introduce a prototype design and implementation of UCM and discuss future research directions toward realizing next-generation communication applications. Keywords: Unified communication model, software generation, on-demand multimedia communication applications, autonomic computing. [19] Onyeka Ezenwoye and S. Masoud Sadjadi. Enabling robustness in existing BPEL processes. Technical Report FIU-SCIS-2005-08, School of Computing and Information Sciences, Florida International University, 11200 SW 8th St., Miami, FL 33199, August 2005. [ bib | .pdf ] Web services are increasingly being used to expose applications over the Internet. To promote efficiency and the reuse of software, these web services are being integrated both within enterprises and across enterprises, creating higher-function services.
BPEL is a workflow language that can be used to facilitate this integration. Unfortunately, the autonomous nature of web services leaves BPEL processes susceptible to the failures of their constituent services. In this paper, we present a systematic approach to making existing BPEL processes more fault tolerant by monitoring the involved web services at runtime, and by replacing delinquent web services dynamically. To show the feasibility of our approach, we developed a prototype implementation that generates more robust BPEL processes from existing ones automatically. The use of the prototype is demonstrated using an existing loan approval BPEL process. Keywords: ECommerce, web service monitoring, robust BPEL processes, dynamic service discovery. [20] Farshad A. Samimi, Philip K. McKinley, S. Masoud Sadjadi, and Peng Ge. Kernel-middleware interaction to support adaptation in pervasive computing environments. Technical Report MSU-CSE-04-30, Department of Computer Science, Michigan State University, East Lansing, Michigan, August 2004. [ bib | www ] In pervasive computing environments, conditions are highly variable and resources are limited. In order to meet the needs of applications, systems must adapt dynamically to changing situations. Since adaptation at one system layer may be insufficient, cross-layer, or vertical approaches to adaptation may be needed. Moreover, adaptation in distributed systems often requires horizontal cooperation among hosts. This cooperation is not only limited to the source and destination(s) of a data stream, but might also include intermediate hosts in an overlay network or mobile ad hoc network. We refer to this combined capability as universal adaptation. We contend that the model defining interaction between adaptive middleware and the operating system is critical to realizing universal adaptation. 
We explore this hypothesis by evaluating the Kernel-Middleware eXchange (KMX), a specific model for cross-layer, cross-system adaptation. We present the KMX architecture and discuss its potential role in supporting universal adaptation in pervasive computing environments. We then describe a prototype implementation of KMX and show results of an experimental case study in which KMX is used to improve the quality of video streaming to mobile nodes in a hybrid wired-wireless network. Keywords: [21] Philip K. McKinley, S. Masoud Sadjadi, Eric P. Kasten, and Betty H. C. Cheng. A taxonomy of compositional adaptation. Technical Report MSU-CSE-04-17, Department of Computer Science, Michigan State University, East Lansing, Michigan, May 2004. [ bib | www | .pdf ] Driven by the emergence of pervasive computing and the increasing need for self-managed systems, many approaches have been proposed for building software that can dynamically adapt to its environment. These adaptations involve not only changes in program flow, but also run-time recomposition of the software itself. We discuss the supporting technologies that enable dynamic recomposition and classify different approaches according to how, when, and where recomposition occurs. We also highlight key challenges that need to be addressed to realize the full potential of run-time adaptable software. This survey is intended to be a living document, updated periodically to summarize and classify new contributions to the field. The document is maintained under the RAPIDware project web site, specifically, at http://www.cse.msu.edu/rapidware/survey. Keywords: adaptive software, compositional adaptation, middleware, survey, taxonomy, pervasive computing, autonomic computing, computational reflection, separation of concerns, component-based design, aspect-oriented programming, object-oriented programming. [22] Zhinan Zhou, Philip K. McKinley, and S. Masoud Sadjadi.
On quality-of-service and energy consumption tradeoffs in FEC-encoded audio streaming. Technical Report MSU-CSE-04-16, Department of Computer Science, Michigan State University, East Lansing, Michigan, April 2004. [ bib | www ] This paper addresses the energy consumption of forward error correction (FEC) protocols as used to improve quality-of-service (QoS) for wireless computing devices. The paper also characterizes the effect on energy consumption and QoS of the power saving mode in 802.11 wireless local area networks (WLANs). Experiments are described in which FEC-encoded audio streams are multicast to mobile computers across a WLAN. Results of these experiments quantify the tradeoffs between improved QoS, due to FEC, and additional energy consumption caused by receiving and decoding redundant packets. Two different approaches to FEC are compared relative to these metrics. The results of this study enable the development of adaptive software mechanisms that attempt to manage these tradeoffs in the presence of highly dynamic wireless environments. Keywords: energy consumption, quality-of-service, forward error correction, mobile computing, handheld computer, adaptive middleware. [23] S. M. Sadjadi and P. K. McKinley. A survey of adaptive middleware. Technical Report MSU-CSE-03-35, Computer Science and Engineering, Michigan State University, East Lansing, Michigan, December 2003. [ bib | www | .ps | .pdf ] Developing distributed applications is a difficult task due to three major problems: the complexity of programming interprocess communication, the need to support services across heterogeneous platforms, and the need to adapt to changing conditions. Traditional middleware (such as CORBA, DCOM, and Java RMI) addresses the first two problems to some extent through the use of a black-box approach, such as encapsulation in object-oriented programming. However, traditional middleware is limited in its ability to support adaptation.
To address all three problems, adaptive middleware has evolved from traditional middleware. In addition to the object-oriented programming paradigm, adaptive middleware employs several other key technologies including computational reflection, component-based design, aspect-oriented programming, and software design patterns. This survey paper proposes a three-dimensional taxonomy that categorizes different adaptive middleware approaches. Examples of each category are described and compared in detail. Suggestions for future research are also provided. Keywords: adaptive middleware, taxonomy, computational reflection, component-based design, aspect-oriented programming, software design patterns, static adaptation, dynamic adaptation, quality of service, dependable systems, embedded systems. [24] P. K. McKinley, Z. Zhou, and S. M. Sadjadi. Tradeoffs between QoS and energy consumption in FEC-supported wireless handheld computers. Technical Report MSU-CSE-03-34, Department of Computer Science, Michigan State University, East Lansing, Michigan, December 2003. [ bib | www ] This paper investigates the energy consumption of forward error correction (FEC) protocols as used to improve quality-of-service (QoS) for wireless handheld devices. Also addressed is the effect on energy consumption and QoS of the power saving mode in 802.11 wireless local area networks (WLANs). Experiments are conducted in which FEC-encoded audio streams are multicast to multiple HP/Compaq iPAQ handheld computers across a WLAN. The results of these experiments help to quantify the tradeoff between improved packet delivery rate, due to FEC, and additional energy consumption caused by receipt and decoding of redundant packets. Moreover, the results enable the development of adaptive software mechanisms that attempt to manage these tradeoffs in the presence of highly dynamic environments. Keywords: energy consumption, quality-of-service, forward error correction. [25] S. M.
Sadjadi and P. K. McKinley. Supporting transparent and generic adaptation in pervasive computing environments. Technical Report MSU-CSE-03-32, Department of Computer Science, Michigan State University, East Lansing, Michigan, November 2003. [ bib | http ] This paper addresses the design of middleware to support run-time adaptation in pervasive computing environments. The particular problem we address here is how to support adaptation to changing network connection capabilities as a mobile user interacts with heterogeneous elements in a wireless network infrastructure. The goal is to enable adaptation to such changes automatically and, with respect to the core application code, transparently. We propose a solution based on the use of generic proxies, which can intercept and process communication requests using rules and actions that can be introduced to the system during execution. To explore their design and operation, we have incorporated generic proxies into ACT [27], a system we designed previously to support adaptation in CORBA applications. Details of ACT-based generic proxies are presented, followed by results of a case study involving adaptation of a surveillance application in a heterogeneous wireless environment. Keywords: middleware, dynamic adaptation, transparent adaptation, generic proxy, dynamic weaving, quality-of-service, mobile computing. [26] S. M. Sadjadi, P. K. McKinley, R. E. K. Stirewalt, and B. H. C. Cheng. TRAP: Transparent reflective aspect programming. Technical Report MSU-CSE-03-31, Computer Science and Engineering, Michigan State University, East Lansing, Michigan, November 2003. [ bib | http | .ps | .pdf ] This paper introduces transparent reflective aspect programming (TRAP), a generator framework to support efficient, dynamic, and traceable adaptation in software. TRAP enables adaptive functionality to be added to an existing application without modifying its source code.
To reduce overhead, TRAP enables the developer to select, at compile time, a subset of classes to support adaptation through run-time aspect weaving. TRAP uses aspect-oriented programming and behavioral reflection to automatically generate the required aspects and reflective classes associated with the selected types. At run time, new adaptive behavior can be introduced to the application transparently with respect to the original code. TRAP can be applied to any object-oriented language that supports structural reflection. A prototype, TRAP/J, which has been developed for use with Java applications, is described. A case study is presented in which TRAP was used to enable an existing audio-streaming application to operate effectively in a wireless network environment by adapting to changing conditions. Keywords: dynamic adaptation, aspect-oriented programming, computational reflection, behavioral reflection, adaptive middleware, transparent adaptation, quality-of-service, mobile computing [27] S. M. Sadjadi and P. K. McKinley. ACT: An adaptive CORBA template to support unanticipated adaptation. Technical Report MSU-CSE-03-22, Department of Computer Science, Michigan State University, East Lansing, Michigan, August 2003. [ bib | http | .pdf ] This paper proposes an Adaptive CORBA Template (ACT), which enables run-time improvements to CORBA applications in response to unanticipated changes in either their functional requirements or their execution environments. ACT enhances CORBA applications by weaving adaptive code into the applications' object request brokers (ORBs) at run time. The woven code intercepts and adapts the requests, replies, and exceptions that pass through the ORBs. ACT itself is language- and ORB-independent. Specifically, ACT can be used to develop an object-oriented framework in any language that supports dynamic loading of code and can be applied to any CORBA ORB that supports portable interceptors. 
Moreover, ACT can be integrated with other adaptive CORBA frameworks and can be used to support interoperation among otherwise incompatible frameworks. To evaluate the performance and functionality of ACT, we implemented a prototype in Java to support unanticipated adaptation in non-functional concerns, such as quality-of-service and system-resource management. Our experimental results show that the overhead introduced by the ACT infrastructure is negligible, while the adaptations offered are highly flexible. Keywords: middleware, CORBA, dynamic adaptation, interoperability, request interceptor, dynamic weaving, proxy, quality-of-service, mobile computing [28] S. M. Sadjadi, P. K. McKinley, and E. P. Kasten. MetaSockets: Run-time support for adaptive communication services. Technical Report MSU-CSE-02-22, Department of Computer Science, Michigan State University, East Lansing, Michigan, July 2002. [ bib | http | .ps.gz | .pdf ] Rapid improvements in mobile computing devices and wireless networks promise to provide a foundation for ubiquitous computing. However, comparable advances are needed in the design of mobile computing applications and supporting middleware. Distributed software must be able to adapt to dynamic situations related to several cross-cutting concerns, including quality-of-service, fault-tolerance, energy management, and security. We previously introduced Adaptive Java, an extension to the Java programming language, which provides language constructs and compiler support for the development of adaptive software. This paper describes the use of Adaptive Java to develop an adaptable communication component called the MetaSocket. MetaSockets are created from existing Java socket classes, but their structure and behavior can be adapted at run time in response to external stimuli. 
MetaSockets can be used for several distributed computing tasks, including audits of traffic patterns for intrusion detection, adaptive error control on wireless networks, and dynamic energy management for handheld and wearable computers. This paper focuses on the internal architecture and operation of MetaSockets. We describe how their adaptive behavior is implemented using Adaptive Java programming language constructs, as well as how MetaSockets interact with other adaptive components, such as decision makers and event mediators. Results of experiments on a mobile computing testbed demonstrate how MetaSockets respond to dynamic wireless channel conditions in order to improve the quality of interactive audio streams delivered to iPAQ handheld computers. Keywords: adaptive middleware, reflection, aspect-oriented programming, forward error correction. [29] Z. Yang, B. H. C. Cheng, R. E. K. Stirewalt, J. Sowell, S. M. Sadjadi, and P. K. McKinley. An aspect-oriented approach to dynamic adaptation. Technical Report MSU-CSE-02-21, Department of Computer Science, Michigan State University, East Lansing, Michigan, July 2002. [ bib | http ] This paper presents an aspect-oriented approach to dynamic adaptation. A systematic process for defining where, when, and how an adaptation is to be incorporated into an application is presented. A formal model for describing characteristics of dynamically adaptive applications is introduced, which enables precise descriptions of different approaches to dynamic adaptation. Keywords: aspect-oriented programming, adaptation, security, group communication. [30] E. P. Kasten, P. K. McKinley, S. M. Sadjadi, and R. E. K. Stirewalt. Separating introspection and intercession to support metamorphic distributed systems. Technical Report MSU-CSE-02-1, Department of Computer Science, Michigan State University, East Lansing, Michigan, January 2002.
[ bib | http | .pdf ] Many middleware platforms use computational reflection to support adaptive functionality. Most approaches intertwine the activity of observing behavior (introspection) with the activity of changing behavior (intercession). This paper explores the use of language constructs to separate these parts of reflective functionality. This separation and "packaging" of reflective primitives is intended to facilitate the design of correct and consistent adaptive middleware. A prototype implementation is described in which this functionality is realized through extensions to the Java programming language. A case study is described in which metamorphic socket components are created from regular socket classes and used to realize adaptive behavior on wireless network connections. Keywords: Adaptive middleware, reflection, component design, mobile computing, wireless networks, forward error correction.
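The last entry above hinges on keeping introspection (observing behavior) separate from intercession (changing behavior). As a rough illustration of that idea only, not code from the report, the following Python sketch exposes observation and modification as two distinct interfaces on an adaptable component; all class and method names here are hypothetical:

```python
# Sketch: separating introspection from intercession on an adaptable
# component, loosely in the spirit of the metamorphic-socket idea.
# Names (MetaComponent, trace, set_send) are illustrative assumptions.

class MetaComponent:
    def __init__(self):
        self._trace = []                      # introspection state: observed calls
        self._send_impl = lambda data: data   # default pass-through behavior

    # --- introspection: observe behavior, never change it ---
    def trace(self):
        return list(self._trace)              # read-only copy of the call log

    # --- intercession: change behavior via a separate interface ---
    def set_send(self, impl):
        self._send_impl = impl                # swap the send implementation

    def send(self, data):
        self._trace.append(("send", data))    # every call is observable
        return self._send_impl(data)          # behavior may have been adapted


sock = MetaComponent()
assert sock.send("hello") == "hello"          # default pass-through
sock.set_send(lambda d: d + "+fec")           # intercede: e.g., add redundancy
assert sock.send("hello") == "hello+fec"      # adapted behavior
assert len(sock.trace()) == 2                 # introspection saw both calls
```

Because observers only ever touch `trace()` and adapters only ever touch `set_send()`, the two concerns cannot silently interfere, which is the consistency property the report's language-level separation aims at.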