Saulius Pavalkis


I'm the Global User Support Manager and an analyst in the MagicDraw R&D team, where I have worked for over 10 years with increasing responsibilities. My major expertise area is model-based requirements engineering. I'm the owner of the new Cameo Requirements Modeler product, which was recently introduced in the MagicDraw product line. I hold a PhD in model traceability from Kaunas University of Technology (KTU), as well as multiple professional certificates: OMG-Certified UML Professional, OMG-Certified Expert in BPM, ITIL V3, and OMG-Certified Systems Modeling Professional. I have written multiple research and practical articles on model-based software design. I'm the founder and chief editor of the modeling community blog (blog.nomagic.com), dedicated to sharing practical model-based engineering experience.

May 07, 2014
 

You can choose which custom properties are shown on an element shape via the “Edit Compartment” dialog. You may wonder whether there is a way to make specific tag values always display by default for a particular (stereotyped) element type.

An example of such a case is a requirements element, which has the Id and Text properties visible by default. Other custom properties, such as Risk, Source, and Verify Method, are visible only in the element specification.

Requirement

Figure 1. Id and Text properties are visible on shape

Requirement specification4

Figure 2. More properties are visible in specification

It’s no surprise that there are a few different solutions available in MagicDraw to achieve this:

1. Two Stereotypes and “Show Properties When Not Applied” Usage

This is the actual case we have with the requirement element. There are two stereotypes. The first one is <<Requirement>>, with the tag definitions Id and Text. All requirements have this stereotype applied by default.

The default value of the requirement's Show Tagged Values symbol property is In Compartment. This results in the behavior above, where the Id and Text properties are visible on the requirement shape.

For the other properties, another stereotype, <<extendedRequirement>>, exists. It has the tag definitions Risk, Source, and Verify Method. This stereotype has a DSL customization created with the property Show Properties When Not Applied set to true, and Show Properties When Not Applied Limited By Element Type set to Requirement. For more about DSL, see the UML Profiling and DSL User Guide.

The result is that extended requirement properties are visible by default only in the requirement specification, and become visible on the shape once a value for the property is specified.

2. [1] Multiplicity and Default Value Usage

Let’s say you have only one stereotype <<customRequirement>> with multiple tag definitions, from which only ID and Owner shall be visible on shape by default.

To achieve this:

1. Set tag definition Multiplicity value to 1 or to 1..*

2. Set the Default Value property for the tag definition. A space character can also be used as the value.

Figure 3. Owner and ID properties have Multiplicity and Default Value properties specified to be shown on shape by default


When the custom requirement is created only the ID and Owner properties are shown on the shape.

Figure 4. ID and Owner properties are shown on shape by default


Other custom properties will appear on shape once the values are specified.

3. Hidden Stereotype Usage

In case you do not want specific tags shown, even if they have values assigned, apply <<InvisibleStereotype>> to the particular tag definition.

Figure 5. ID property with <<InvisibleStereotype>> applied

When a custom requirement is created, such a property (in our case, the ID property) will not be shown on the shape, regardless of whether a value is specified.

Figure 6. Only Owner property is shown on shape


Figure 7. All properties can be specified in custom requirement specification


In summary, we have three methods to control which properties are shown on shapes by default. The first two methods are used when you need properties to appear on the shape once a value is set for them. Use the third when you do not want specific custom properties shown, even if they have values assigned.

Apr 28, 2014
 

The research was carried out at the Jet Propulsion Laboratory (NASA JPL) under a contract with the National Aeronautics and Space Administration and the European Southern Observatory (ESO).

The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model’s logic by performing fault injection simulations, and verify the fault protection system’s logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.

INTRODUCTION

The Soil Moisture Active Passive (SMAP) mission will provide global measurements of soil moisture and its freeze/thaw state. These measurements will be used to enhance understanding of processes that link the water, energy and carbon cycles, and to extend the capabilities of weather and climate prediction models. SMAP data will also be used to quantify net carbon flux in boreal landscapes and to develop improved flood prediction and drought monitoring capabilities [6].

Highly complex systems, such as the SMAP Fault Protection system [1], are difficult to develop, test, and validate using traditional methods – Fault protection design has been prone to human error and subject to limited multi-fault, multi-response testing. Traditionally, responses are designed individually because it is not feasible for humans to incorporate all combinations of fault protection events in design or test without a model. It is also expensive to use high fidelity test beds, limiting the scope of the possible combined-response tests that can be performed. To explore new model-based methods of testing and validating fault protection, SMAP Fault Protection logical designs were used to architect a representative SysML behavioral model that was used to exploit fault injection testing and model checking capabilities. Model checking provided a basis for checking fault protection design against the defined failure space and enabled validation of the logical design against domain specific constraints (for example, during ascent the receiver should be on and the transmitter should be off).

The model is transformed to run simulations, create artifacts to be model checked, and to produce the final software implementation.

In order to gain confidence in the validation and verification of the model based design and its implementation the following questions must be addressed:

  • Does the model represent the system?
  • Do the generated artifacts for model checking represent the model?
  • Do the generated artifacts for model checking represent the final software system implementation?

In the context of this paper, simulation is used to validate the model against requirements. It is also assumed that the generated artifacts for model checking represent the model. However, this could be mitigated by comparing the simulation results with the execution of the generated code for model checking. Finally, the code used for model checking is not part of the final software system implementation. Simulation of the model caught (initial modeling and design translation) errors and provided the ability to inject a variety of inputs to test many aspects of the model, leading to confidence in the logical design of the model. It became clear, as more error monitors and responses were added to the model, that it would not be possible to manually run simulations for all of the possible sequences of the model – a model checker is necessary to formally and exhaustively verify the model for all possible sequences.

TOOLCHAIN

The tool-chain consists of: a UML modeling tool (MagicDraw 17.0.4) with SysML plugin and simulation environment (Cameo Simulation Toolkit 17.0.4 which is based on Apache SCXML Engine 0.9), a model-to-text transformation tool (COMODO), and a model checker (JPF6 and JPF7).

MagicDraw is used to model and represent the system in terms of collaborating Statecharts according to SysML 1.3. The model is exported in UML2 XMI 2.x format and then processed by COMODO.

COMODO [3] is a platform independent tool for generating text artifacts from SysML/UML models using Xpand/Xtend technology. For example, COMODO can transform SysML State Machine models into Java code compliant with JPF’s Statechart project (jpf-statechart), as well as into the final software implementation for different platforms.

Statecharts XML (SCXML) is a W3C notation for control abstraction, defining the syntax and semantics of Statechart execution [5]. Apache Commons SCXML is one implementation of SCXML.

MODELING FOR MODEL CHECKERS

An idea of the overall complexity of the SMAP model is provided in the following figure. The SMAP Fault Protection Engine consists of an Error Monitor Statechart and Response Statecharts. External signals from device Statecharts, such as a reaction wheel, and the mode manager Statechart are input to the Fault Protection Engine.
The size of the model checking state space is a valuable indicator for the complexity of the model. Model checkers are computation and memory intensive. After initial model checking runs were found to take days to exhaustively check a small subset of the SMAP Fault Protection system, it became apparent that model patterns should aim at decreasing the state space. In an attempt to reduce the state space, adjustments were made to the model architecture and the Statechart representation of the fault protection system’s response tiers and response queue.

Figure 1: SMAP model.


Response tiers define the sequence of actions performed by fault protection system responses. Each subsequent tier of a given response attempts to mitigate a fault with different sets of actions. If a response has more than one tier, subsequent tiers will not be performed until prior tiers fail to mitigate the fault. The error monitor that detects the fault must be re-tripped between each subsequent tier. In cases where all tiers are executed and the fault still exists, the response resets and re-executes its tiers (assuming the response has not been masked).
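The tier progression described above can be sketched in plain Java (a hypothetical simplification; the class and method names are ours, not the SMAP flight software or the generated jpf-statechart code):

```java
// Hypothetical sketch of tier progression: a response executes one tier
// per trip of its error monitor, and resets to the first tier once all
// tiers have run without clearing the fault (unless masked).
public class Response {
    private final int tierCount;
    private int currentTier = 0;
    private boolean masked = false;

    public Response(int tierCount) { this.tierCount = tierCount; }

    public void mask() { masked = true; }

    // Called each time the error monitor re-trips. Returns the tier
    // executed, or -1 if the response is masked.
    public int onMonitorTripped() {
        if (masked) return -1;
        int executed = currentTier;
        currentTier++;
        if (currentTier >= tierCount) {
            currentTier = 0; // all tiers failed: reset and re-execute
        }
        return executed;
    }
}
```

A two-tier response would execute tier 0, then tier 1 on the next re-trip, then wrap back to tier 0 if the fault persists.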

When system responses are tripped by error monitors, they are placed in the fault protection response queue based on priority: high priority responses are placed at the front and low priority responses at the back. A set of activation rules evaluates the response at the front of the queue and either allows the response to begin executing its tiers or, if the activation rules do not pass, denies response execution and places the response back into the queue.
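The queue behavior just described can be sketched as follows (a hypothetical stand-in; the activation rules here are a simple predicate parameter, not the real SMAP rules):

```java
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.function.Predicate;

// Hypothetical sketch of the response queue: tripped responses are
// ordered by priority; activation rules evaluate the front response and
// either release it for execution or place it back into the queue.
public class ResponseQueue {
    static class QueuedResponse {
        final String name;
        final int priority;
        QueuedResponse(String name, int priority) {
            this.name = name;
            this.priority = priority;
        }
    }

    // Higher priority responses come out of the queue first.
    private final PriorityQueue<QueuedResponse> queue = new PriorityQueue<>(
        Comparator.comparingInt((QueuedResponse r) -> r.priority).reversed());

    public void trip(String name, int priority) {
        queue.add(new QueuedResponse(name, priority));
    }

    // Returns the released response's name, or null if the front
    // response was denied and requeued (or the queue was empty).
    public String tryActivate(Predicate<QueuedResponse> activationRules) {
        QueuedResponse front = queue.poll();
        if (front == null) return null;
        if (activationRules.test(front)) return front.name;
        queue.add(front); // denied: placed back into the queue
        return null;
    }
}
```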

The Statechart in Figure 2, with orthogonal regions, provides a straightforward example of a model checking application. The correctness property inserted ensures that state B and state E are never active at the same time: assert !(inState(B) && inState(E)).

Figure 2: JPF Model Checking Example


This Statechart is translated to Java using COMODO, and the correctness property has been inserted manually into the code, as shown in Figure 3.

Figure 3: Inserting the Correctness Property.


When JPF was run, it instantly found a counterexample to the assertion and output the error trace and performance statistics shown in the following figure. Following trace #1, error #1 was found because trace #1 defines an existing path that leads to B and E being active together. The statistics show that essentially no elapsed time was needed to perform this very basic model-checking task.

Figure 4: JPF Output of Error Trace.

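Since the generated code itself is not reproduced here, the shape of such a manually inserted correctness property can be sketched as follows (a hypothetical plain-Java stand-in for the Figure 2 Statechart; the real jpf-statechart code differs):

```java
// Hypothetical stand-in for two orthogonal regions, with the
// correctness property (B and E never active together) inserted
// directly into the step logic.
public class OrthogonalRegions {
    private String region1State = "A"; // region 1 holds A, B, C
    private String region2State = "D"; // region 2 holds D, E, F

    public boolean inState(String s) {
        return region1State.equals(s) || region2State.equals(s);
    }

    public void step(String r1, String r2) {
        region1State = r1;
        region2State = r2;
        // The manually inserted correctness property:
        if (inState("B") && inState("E")) {
            throw new AssertionError("states B and E active simultaneously");
        }
    }
}
```

A checker exploring all region combinations would report the path reaching {B, E} as the counterexample, just as JPF did above.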

The adjustments made to the SMAP model in order to reduce the state space include:

1. Use of composite Statecharts with orthogonal regions to define each behavior. It was verified that the state space of a Statechart with orthogonal regions is equivalent to that of a flat state machine, since Statecharts are only a notational enhancement of state machines. However, the Statechart representation is more compact and the model is more readable.

2. Guards were placed on transitions wherever possible. The difference in computation time between a model with few guards and another model of triple the size with many guards was found to be a factor of 5000 (see Table 2). Adding guards to most transitions reduces the complexity of the system to be checked, limiting the number of paths and making it quicker to check.

3. Enumerations were used instead of integers, reducing the risk of state space explosion due to unbounded variables. In the example of Figure 5, the model checker will consider each increment of the variable t as a new state and will therefore execute the else branch until t overflows and wraps around to reach its initial value.

Figure 5: Integer incrementor.


An additional remedial action that can be used to avoid the situation of Figure 5 is the use of assertions to detect potentially unbounded variables.
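The effect described for Figure 5 can be illustrated with a small stand-alone experiment (hypothetical plain Java; it only mimics how a checker counts distinct variable values, it is not an actual JPF run):

```java
import java.util.EnumSet;
import java.util.HashSet;
import java.util.Set;

public class CounterStates {
    enum Phase { IDLE, ARMED, TRIPPED }

    // An unbounded int counter: every increment produces a value the
    // checker has not seen before, i.e. a new state.
    static int distinctIntStates(int increments) {
        Set<Integer> seen = new HashSet<>();
        int t = 0;
        for (int i = 0; i < increments; i++) {
            seen.add(t);
            t = t + 1;
        }
        return seen.size();
    }

    // An enumeration cycling through three values: the set of reachable
    // states stays bounded no matter how long the run is.
    static int distinctEnumStates(int increments) {
        Set<Phase> seen = EnumSet.noneOf(Phase.class);
        Phase p = Phase.IDLE;
        for (int i = 0; i < increments; i++) {
            seen.add(p);
            p = Phase.values()[(p.ordinal() + 1) % Phase.values().length];
        }
        return seen.size();
    }
}
```

After 1000 steps the int version has visited 1000 distinct states while the enum version has visited only 3, which is why the enumeration keeps the model checker's state space bounded.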

The initial attempt at architecting the Fault Protection model did not consider the limitations of the tool chain. The initial model used complex elements and diagrams such as sequence diagrams, nested logic (hidden If statements), complex state machine model elements (e.g. decision nodes), and global variables. Simulation artifacts began overtaking the model because of the complex elements and nested logic. Additionally, the current version of the SysML to Java code transformation tool is limited to interpreting composite Statecharts, transition guards, signals, and opaque behaviors. Thus, the model architecture was refactored to use explicit logic and simple Statechart elements, leading to a much cleaner and clearer architecture that can be simulated and model-checked.

The fundamental drivers of the modeling task are patterns and practices that lead to efficient model checking (keeping the state space as small as possible). Model checking should allow an exhaustive search to be performed in reasonable time and with acceptable memory consumption. The goals are to make checking of large system models a standard practice that is accessible to a wider audience of engineers, and to make the tool easy to use by automating the process so that highly specialized skills are not required to produce an optimal representation of the system and the properties to be checked.

Currently, jpf-statechart (in scriptless mode) does not distinguish between external and internal (created by entry/do/exit behaviors and transition effects) events but checks all combinations of events so no cases of the modeled behavior are missed. However, many paths irrelevant for the specification of the system behavior are explored.

With this distinction, the model checker would have, for every state configuration, only a limited number of internal events (which are possible in a particular state) available when generating events during transition exploration. For example in Figure 6, assuming that e1, e2, e4, e5 are external events and e3, e6 are sent by the behavior of state S1, while e7 is sent by the behavior of state S2, then the model checker could ignore e6 when the state configuration is {S2, S4}.

Figure 6: Multiple Paths to reach a state.


Guards are critical in keeping the state space limited. A goal to reduce the state space even more is to further develop jpf-statechart to take into account the internal signals that are sent in behaviors. One possible solution is to annotate trigger methods with states in which they are enabled.
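The proposed pruning can be sketched as a lookup from a state configuration to the internal events it can emit (hypothetical code; the state and event names follow the Figure 6 example, everything else is ours):

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical sketch of the proposed pruning: each state declares the
// internal events its behavior can send, so for a given configuration
// the checker only generates the internal events possible there.
public class InternalEventPruning {
    // From the Figure 6 discussion: S1 sends e3 and e6, S2 sends e7.
    static final Map<String, Set<String>> SENT_BY = Map.of(
        "S1", Set.of("e3", "e6"),
        "S2", Set.of("e7"));

    static Set<String> internalEventsFor(Set<String> configuration) {
        Set<String> result = new TreeSet<>();
        for (String state : configuration) {
            result.addAll(SENT_BY.getOrDefault(state, Set.of()));
        }
        return result;
    }
}
```

For the configuration {S2, S4}, only e7 would be generated, so e6 is ignored exactly as described above.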

This demonstrates good modeling for model-checking practice: Use guards to encode the knowledge of the model about internal events. Without guards, the model checker would exhaustively explore the complete state space, whereas in the final run-time system the entire state space would never be checked since only a limited number of events will occur at any given moment in time. Adding guards that limit the number of possible transitions results in a two-fold advantage. First, the state space that is explored is drastically reduced, decreasing the time and memory used for model checking. Second, including the guards in the final implementation ensures the system will never end up in an “unexpected” state if an out of order event occurs. In both cases the complexity of the system is reduced, allowing the amount of involved testing to be reduced and off-nominal behavior to be limited (it is worthwhile to investigate how the introduction of model checking affects traditional test strategies). It is important to note that guards also help when doing an initial validation and verification of the model using simulation. For example, in multiple circumstances, the model could not be simulated after adding a guard, which pointed out a model error or bug. The guards also validate the modeler’s assumption on the behavior of the model.
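The guarded-transition practice can be sketched as a miniature state machine (hypothetical plain Java; the states and events are illustrative, not from the SMAP model):

```java
// Hypothetical sketch of guarded transitions: each transition carries a
// guard on the current state, so out-of-order events are simply
// ignored rather than leading to an "unexpected" state.
public class GuardedMachine {
    public enum State { ASCENT, ORBIT, SAFE }
    private State state = State.ASCENT;

    public State getState() { return state; }

    // Returns true if the event caused a transition, false if it was
    // guarded out in the current state.
    public boolean handle(String event) {
        if (event.equals("separation") && state == State.ASCENT) {
            state = State.ORBIT;
            return true;
        }
        if (event.equals("fault") && state == State.ORBIT) {
            state = State.SAFE;
            return true;
        }
        return false; // event not meaningful in this state
    }
}
```

A checker exploring this machine only follows the transitions whose guards pass, and the same guards protect the run-time system from out-of-order events.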

Multiple versions of the fault protection system’s response queue were modeled in order to find a pattern that could be executed and model checked. The response queue was initially modeled using opaque behaviors with ‘if inState’ code inside of states – this actually represents implicit Statechart states that cannot be considered by jpf-statecharts but only by JPF-Core, therefore increasing the state space. Now the response queue is modeled explicitly with multiple nested states and ‘inState’ guarded transitions. It was found that both methods work for simulation and JPF (because JPF can interpret the code inside of opaque behaviors); however because the latter method is explicit (it does not have hidden guards in the code) and limits the state space with guarded transitions, it was chosen for the queue pattern.

This is the first part of the article originally published at http://dl.acm.org/citation.cfm?id=2560583
Image source http://www.jpl.nasa.gov/missions/soil-moisture-active-passive-smap/

Apr 17, 2014
 

Why do MBSE? What is Model Based Systems Engineering? How do you define and implement an effective process? What are the fundamental concepts and enablers of MBSE? INCOSE has produced a brochure explaining why MBSE should be used instead of a paper-based approach. Models are created to deal with complexity. In doing so, they allow us to understand an area of interest or concern and provide unambiguous communication amongst interested parties.


Source: http://www.incoseonline.org.uk/Documents/zGuides/Z9_model_based_WEB.pdf

Apr 04, 2014
 

In order to manage the growth, complexity, and resource demands of mission critical systems, Lockheed Martin Corporation (LMCO) has transitioned to using Model Based Systems Engineering (MBSE) (see the side bar) at large scale. The transition was very successful, but it also required adopting best practices along the way. The newest MagicDraw version provides real-life project capabilities (e.g. Smart packages) out of the box, which will provide further productivity and quality gains supporting the configuration management approach.

Challenges Pushing the MBSE Adoption

MBSE has been adopted for US Navy submarine combat systems software and hardware configuration management.

The Submarine Warfare Federated Tactical Systems (SWFTS) program (see the side bar) provides parallel management of external interfaces to the combat system and internal interfaces between subsystems within the combat system.

In addition to the complexity of configuration management, the SWFTS model is large. The combat system includes approximately:

  • 35 subsystems from over 20 program offices
  • 2,500 interface requirements
  • 100 services
  • 3,700 model elements for interfaces
  • More than 15,000 relationships between model elements
  • 500,000 model elements.

The scope of the SWFTS systems engineering effort has increased over time, with more parallel changes and more concurrent baselines, thus increasing the engineering workload.

To handle complexity, increase productivity, and save costs, MBSE was adopted to manage SWFTS configurations.

The key issue in applying MBSE was efficiently representing system variation in the systems engineering of product families. This is important both to minimize the duplicative data to be maintained and synchronized within the system models, and to minimize the conceptual complexity of the system model.

Configuration Management Solution

To handle the task of dozens of product configurations managed in parallel, with many of those baselines being updated several times a year, LMCO developed a new SysML modeling technique.

It extends the concepts of libraries with SysML Catalogs to bound the complexity of the configuration task, improving the quality and efficiency of the systems engineering process.

Catalogs frame alternative views of the model for the engineer. Using a catalog as an active filter of the model:


Figure 1 Constructing catalogs of approved components from libraries of available components

  • Reduces the scope of the library without duplicating the elements.
  • Provides utilization assessments for elements across multiple baselines and baseline configurations.

 As shown in Figure 1, the approved subset of servers from the list of all servers is imported into a catalog for a specific baseline (TI10 or TI12 in the example). Similarly, these catalogs are populated with other hardware components approved for those baselines. Each catalog restricts the scope of the configuration to those components approved for the specific baseline.

LMCO noticed that constructing the baseline system configurations is a technically challenging task. Given the large number of baselines that must be managed, the total number of software and hardware components, interface specifications, etc., used in one or more baselines at any given time is quite large.

For an engineer constructing a new baseline, hunting manually through dozens of server and switch models or tens or hundreds of versions of interface specifications would be so laborious and error-prone as to defeat the productivity and quality objectives of introducing Model Based Systems Engineering to the SWFTS program.

Efficient management of the product configuration process is a challenge in the evolution of any industrial scale product family. The standards themselves do not address this problem in a scalable fashion. In addition, existing UML/SysML modeling tool support for variation points appeared to be inadequate for an industrial problem of this magnitude.

Configuration Management Mechanism

It was necessary to create a mechanism or plugin for appropriately restricting the scope of objects available to the engineer constructing or modifying a given baseline. If the totality of servers, switches, displays, etc. included in the hardware model is considered as a library of candidate hardware components, what is needed is a catalog containing only those components which are approved for baseline use in the configuration at hand.


Figure 2 Constructing a system configuration from catalogs of approved baseline components

The process of constructing a baseline from a set of catalogs is shown in Figure 2. In this case, a variant configuration from the TI10/APB09 baseline is being constructed for a specific class of submarines. The TI10 hardware catalog is open in the browser on the left side of the screen capture (1), and specific servers are being configured into processing racks that will be installed on the submarines (2).

LMCO identified that the tool support shown in Figure 2 is critical to the productivity and quality gains projected for the conversion of SWFTS from a document-based to a model-based systems engineering process. The unique solution was implemented by LMCO as a No Magic Inc. MagicDraw plugin.

LMCO predicted that a similar user-interface feature is likely to be imitated by other tool vendors as a natural side-effect of competition.

No Magic Inc. responded with a highly flexible capability to have a criteria dependent package → Smart package.

No Magic response – Smart Packages Usage for System Configuration Catalogs


Figure 3 Smart package based catalog dynamically includes components

A Smart package is a special collection of model elements. An element is included in the smart package automatically if it meets the set of criteria defined by the user. For example, the user can create a group “TI14 Catalog” with the criterion “all components with import relations incoming from package TI14 Catalog”.

Note: If you no longer need the contents of a smart package to be dynamic, you can simply freeze it.

Figure 4 The most powerful query engine in the modeling tools industry, used for Smart packages


Smart packages aggregate relevant elements so that you can:

  • Browse, navigate, list, and discover these elements in the Containment tree.
  • Narrow the scope in both the Find dialog and the Element Selection dialog.
  • Define dynamic row and column scopes in dependency matrices. For example, after tagging a component with TI14, the component is automatically included in the group “TI14 Catalog” and thus is added to any dependency matrix where this smart package is defined as the scope.

Smart packages are query based. The newly enhanced query engine (Figure 4) is extremely flexible and is now the most powerful in the modeling tools industry. A criterion can be as simple as a UML relationship or as complex as an Object Constraint Language (OCL) expression. You can define the following flavors of criteria: simple criteria, OCL expressions, meta chains for navigation through chains of properties, and Java code.

A criterion can also be any combination of the items from the preceding list. In addition, the query engine is parameter based, so one query result can be the parameter of another query, scope, or type, without any limits.
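The smart package idea can be sketched as a criterion evaluated over the model on demand (hypothetical plain Java, not the MagicDraw API; element and tag names are illustrative):

```java
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical illustration of a smart package: its contents are
// computed from a criterion over the model rather than stored, so an
// element that later satisfies the criterion appears automatically.
public class SmartPackageDemo {
    static class Element {
        final String name;
        final Set<String> tags;
        Element(String name, Set<String> tags) {
            this.name = name;
            this.tags = tags;
        }
    }

    static List<String> contents(List<Element> model, Predicate<Element> criterion) {
        return model.stream()
                    .filter(criterion)
                    .map(e -> e.name)
                    .collect(Collectors.toList());
    }
}
```

For example, a "TI14 Catalog" smart package is just the criterion "has tag TI14" evaluated over the component library each time the package is opened.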

Detailed Solution for Creating Dynamic System Configuration Catalogs

Figure 5 shows construction of the TI14 catalog of approved components from the TI Hardware library of available components: (1) Smart packages (the catalogs Cabinet, Server, etc.) use the query engine to aggregate the required components dynamically. (2) Once the query is specified, the content of the TI14 catalog smart package is created and updated automatically. (3) Some components, such as the DELL R710, are imported into the TI14 catalog manually by drag and drop. (4) The ownership of individual components, such as the DELL R710, is not changed, which allows reuse in multiple catalogs.

Figure 5 Constructing catalogs of approved components from libraries of available components


An alternative solution is to construct catalogs based not on import relations but on property values, as shown in Figure 6. A Find query (1) is used to search for catalog TI14 components and show them in the TI14 Smart package (3).

Figure 6 Labels based construction of catalogs


Conclusions

Challenges

  • Manage the Complexity Faced by Systems Engineers
  • Manage High Variability Between Platforms
  • Maximize Reuse Between Baselines
  • Improve the Quality and Efficiency of the Baseline Configuration Process

Benefits

  • Usage of MBSE found bugs in previous baselines
  • 13% Savings between SE and MBSE
    • 25% in Capability Definition
    • Another 10% over DOORS in Baseline Management
  • Savings Seen in 4th Year
    • 2 Years to Implement Model
    • 1 Year Transition Overlap with Current Process

Solution

  • Adopt MBSE to Enable a More Efficient System Engineering Process
  • Provide Intuitive MBSE tools to enable Engineers to Develop Complex Systems with Maximum Reuse
  • No Magic Inc. responded to LMCO and other customers working on complex systems configuration management with a highly flexible capability to have a criteria dependent package → the Smart package.
  • The Smart package capability, with the most powerful query engine in the modeling tools industry, enables efficient management of the product configuration process in any industrial scale product family. It is a major means to the productivity and quality gains expected from the conversion from a document-based to a model-based systems engineering process.

Results

  • MBSE applied to an existing system achieved greater productivity and improved quality of existing program.
  • Hierarchy of Models Supporting TEAM SUBMARINE Engineering
  • Reduced duplication and inconsistency of element definitions
  • Developed Libraries and Catalogs to improve the quality and efficiency of the baseline configuration process


Mar 26, 2014
 

Two years ago, the French Ministry of National Education decided to bring a Model Based Systems Engineering (MBSE) approach to schools with a focus on engineering and natural sciences. Each school (Fr. lycée) chose its own solution to implement this approach.

The Lycees based their choices on several criteria, including ease of use and compliance to the SysML standard. Baudouin Martin led a nationwide evaluation group, and their top recommendation was the No Magic, Inc. MBSE solution consisting of MagicDraw with the SysML plugin. Approximately 300 Lycees now use this solution for MBSE education. Baudouin Martin also provides courses for instructors nationwide.

Below you can find MBSE courses recorded by Baudouin Martin:

The Lycees developed a course book using MagicDraw and SysML as the core tools.

More about No Magic's academic and research programs can be found at http://www.nomagic.com/services/academic-research.html

Mar 06, 2014
 

MagicDraw and Atlassian JIRA are popular products that can usually be spotted in an enterprise software collection. So a natural and common question arises: how can they work together?

In most cases, JIRA serves as an issue tracking system with the ability to have custom workflows, approvals, states, etc. MagicDraw, in most cases, is used as a repository for architecture, or as a single source of well-structured information, be it requirements, business processes, software, or system design.

Long story short:

  • A link to the capability, bug or change request (let’s call it an item) in JIRA can be added to the requirement, design element, test case, package, or diagram (let’s call it element) in the MagicDraw modeling project.
  • Vice versa, starting from MagicDraw v17.0, a link to a model element can be created and added to a JIRA item. Click this link and MagicDraw will start with the required project, with the element selected in it.

Link from issue tracking software to MagicDraw

How to create the link to the model element?

You can copy a MagicDraw project element URL to a clipboard and share it with others as a quick reference to model elements. To copy a project element URL, do either of the following:

  • Select Copy URL from the element shortcut menu in the Containment tree to copy the URL to a model element

OR

  • Select the element symbol in a diagram and click Edit > Copy URL on the main menu to copy the URL to the element symbol


Note: A custom solution with usability enhancements is available:

  • Copy link as hyperlink: Copy URL for JIRA
  • Copy URLs of multiple selected elements in a single action

Where to add the link in JIRA?

In the picture below, we can see a capability recorded in JIRA. The link to the model element is added to the description field. It is up to your process whether the link is attached to an Analysis Sub-task or directly to the capability, and whether the link is added to the description field, as shown here, or held in a dedicated field.

How to open a MagicDraw project and select a particular element in it from the link?

There are a couple of methods available:

1. Double-click the link in JIRA. Once the link is clicked, MagicDraw starts, an automatic connection to Teamwork Server is performed, the project is opened, and the linked element is shown as selected.

Notes:

  • In the case of a local project (not on Teamwork Server), MagicDraw will start with a request to point to the required project.
  • Starting from v17.0, the custom URL scheme “mdel://” is registered in the Windows registry during MagicDraw installation.
  • The most recently installed MagicDraw will start when the link is clicked.
  • Your Internet browser could have a different version of MagicDraw associated with this protocol.

2. Copy the URL, click the Open Element from URL command (in the MagicDraw File main menu), and the element will be highlighted in the Containment tree or in the diagram.


Link from MagicDraw to issue tracking software

There are a couple of methods to add the JIRA item link to the model element:

  1. Just drag and drop the JIRA item URL on any model element. The hyperlink will be created.
  2. Use a dedicated property in the element to hold the JIRA item ID.

Leave a comment if you have different integration needs, would like to share personal insights, or would just like to say hello. We are always happy to hear from you!

Feb 17, 2014
 

Make requirements first class citizens in the modeling world.

Requirements are gathered and managed in dedicated requirements tools. When it comes to requirements refinement and integration with business, software, and system architecture, different requirements interchange formats are used: comma-separated values, MS Excel, Word, or XML. These are nonstandard approaches that bring drawbacks, so it is clear that a dedicated, common format is needed. This is why the German automotive industry started developing an open, non-proprietary format for requirements exchange.

Starting with the upcoming v18.0, all MagicDraw-based Cameo Suite products will support ReqIF import as part of the new Cameo Requirements Modeler plugin.

What is ReqIF?

ReqIF is an XML-based international standard for requirement data exchange, standardized by the Object Management Group (OMG). It has solid recognition in the industry and has been adopted by many requirements management tool vendors. It is used to exchange requirement information between different tools and tool chains.
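To give a feel for the format, the fragment below is a heavily simplified, hand-written ReqIF sketch parsed with Python's standard library. The element names (REQ-IF, SPEC-OBJECT, etc.) come from the ReqIF specification, but real files exported by requirements tools additionally carry headers, datatype definitions, attribute values, and specification hierarchies; the requirement identifiers here are invented for illustration.

```python
import xml.etree.ElementTree as ET

# ReqIF 1.0 namespace as published by the OMG.
REQIF_NS = "http://www.omg.org/spec/ReqIF/20110401/reqif.xsd"

# Minimal hand-written ReqIF fragment; real exports are far richer.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<REQ-IF xmlns="http://www.omg.org/spec/ReqIF/20110401/reqif.xsd">
  <CORE-CONTENT>
    <REQ-IF-CONTENT>
      <SPEC-OBJECTS>
        <SPEC-OBJECT IDENTIFIER="REQ-001" LAST-CHANGE="2014-02-17T00:00:00Z"/>
        <SPEC-OBJECT IDENTIFIER="REQ-002" LAST-CHANGE="2014-02-17T00:00:00Z"/>
      </SPEC-OBJECTS>
    </REQ-IF-CONTENT>
  </CORE-CONTENT>
</REQ-IF>
"""

root = ET.fromstring(sample)
# Each SPEC-OBJECT is one requirement; its IDENTIFIER is the stable key
# an importer can use to match requirements on later updates.
ids = [obj.get("IDENTIFIER")
       for obj in root.iter(f"{{{REQIF_NS}}}SPEC-OBJECT")]
print(ids)  # ['REQ-001', 'REQ-002']
```

The stable IDENTIFIER attribute is what makes round-trip updates possible: a tool can re-import a newer file and match each incoming requirement to the one it imported before.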

“For requirement analysis, ReqIF is the same as Unified Modeling Language (UML) for modeling – it is the most popular and dedicated requirements interchange format.”

ReqIF Support in MagicDraw

ReqIF Importer imports and updates (previously imported) requirements in models with the following capabilities:

  • Import process includes the ability for custom mapping with the option to import all data and dynamically create properties.
  • Update process includes change management support with requirements status identification. After import, new, updated, unchanged, or obsolete requirements are identified, with the ability to check the impact of changes.
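The status identification described above amounts to a set comparison between the previously imported requirements and the incoming ones. The sketch below is illustrative only; the function name and data shapes are hypothetical, not MagicDraw's actual import API.

```python
def classify_requirements(previous, current):
    """Classify requirements after a re-import.

    `previous` and `current` map requirement IDs to their text, e.g.
    {"REQ-001": "The system shall ..."}.  Returns a dict of ID -> status.
    """
    statuses = {}
    for req_id, text in current.items():
        if req_id not in previous:
            statuses[req_id] = "new"
        elif previous[req_id] != text:
            statuses[req_id] = "updated"
        else:
            statuses[req_id] = "unchanged"
    for req_id in previous:
        if req_id not in current:
            statuses[req_id] = "obsolete"  # removed at the source
    return statuses

before = {"REQ-001": "Shall start", "REQ-002": "Shall stop"}
after = {"REQ-001": "Shall start quickly", "REQ-003": "Shall pause"}
print(classify_requirements(before, after))
# {'REQ-001': 'updated', 'REQ-003': 'new', 'REQ-002': 'obsolete'}
```

In a real importer the comparison would cover all mapped attributes, not just the text, but the four resulting statuses are the same.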


Once imported, requirements become first class citizens in the modeling world. That means they can be:

  • Integrated with other models (business, software and systems architecture, test cases) and tool chains: PLM tools (e.g. Teamcenter), CAD tools (e.g. CATIA), and others. This enables requirements-driven design and communication of changes with all stakeholders.
  • Reviewed with visualization in diagrams, tables, matrices, and structure maps.
  • Analyzed with built-in and custom validation suites, coverage metrics, and traceability.
  • Shared via a global modeling project repository, supporting collaboration inside a project, change and configuration management, and multisite work.
  • Simulated with an OMG standard-based framework supporting model execution, debugging, animation, and user interface prototyping.
  • Published as MS Office and OpenOffice documents and Web-based reports, with the ability to create custom reports incorporating the required data.

ReqIF Importer Features

1. Import data that originated in a wide variety of tools*


* Import tested with: IBM Rational DOORS 9.4, 9.5, Next Generation, Polarion, PTC Integrity, Siemens TC, and other ReqIF 1.0 compatible data sources.

2. Update existing data. It is possible to create relations from requirements to any other model element (e.g. test cases or architectural components) to realize the total traceability required by your processes. On update, all custom relations are left untouched.

3. Change status identification (updated, new, unchanged, or obsolete)


4. Requirements structure and structure changes support


5. Physical requirements remove action


6. Status and summary notification message


7. Custom mapping


8. Dynamic property discovery: there is no need to determine in advance what data is in the ReqIF file; all properties can be imported

9. Omit the data you don’t need to import

10. Rich text support


About Cameo Requirements Modeler plugin

The Cameo Requirements Modeler Plugin is the central component for simple model-based requirements support. It implements the requirements part of the OMG SysML standard, which has proved itself in systems engineering. With this plugin, requirements are now available for all domains: business, software, and enterprise architecture. The plugin provides the means to import, create, and store requirements with the rest of the model, as well as trace and analyze them and keep them consistent with other models. The Requirements Interchange Format (ReqIF) makes it open for interchange. It is also easily extendable and customizable.

More About ReqIF

ReqIF Recognition

The group working on the initial release of ReqIF consists of the ProSTEP iViP Association, Atego Systems GmbH, Audi AG, BMW AG, Continental AG, Daimler AG, HOOD GmbH, IBM, MKS GmbH, PROSTEP AG, Robert Bosch GmbH, and Volkswagen AG.

ReqIF Sources

Dec 19, 2013
 

The KG Group was recently restructured into three different companies: AB “Kauno Grūdai,” AB “Vilniaus paukštynas,” and AB “Kaišiadorių paukštynas,” and introduced ten managed business activities. The group is actively expanding, and each year a new business activity is added to the company’s portfolio.

Management understood that employee process knowledge and the alignment of processes with operations were vital for business growth. The existing level of knowledge did not allow the company to satisfy customers’ demands effectively.

The company was experiencing these problems:

  • Different levels of knowledge among employees affected the quality of work
  • Employees working in a complex, interconnected work-process environment lacked knowledge of related processes, goals, and priorities
  • It was not clear how to keep employees’ knowledge at the required level
  • There were no process self-control mechanisms

Solution For Training New Employees

The company established a project for identifying business processes and responsibilities, and for employee training and certification. The project was part of systematic changes. Program goals:

  • Document existing business processes
  • Identify role responsibilities
  • Review and improve existing processes
  • Ensure that processes work and are followed
  • Use documentation for employee training and evaluation
  • Measure employee behavior and decisions

Implement Process Modeling

The Process Modeling phase was implemented with a highly usable and standard-compliant BPMN 2 solution from No Magic: Cameo Business Modeler (CBM). Business analysts gathered the processes, consulting with the company’s middle managers. Documented processes were improved against these criteria:

  • Process supports business strategy and goals
  • Process has defined Key Performance Indicators (KPI’s)
  • Process supports customers’ needs
  • Process optimizes resources

More than 200 processes and more than 1000 tasks were documented in a one-year period.

Documentation Generation

An internal website available to all employees was generated from the process model. A documentation form was defined based on the specific needs of the KG Group, using a KG Group template. Documentation generation was automated and scheduled to publish directly to the company web server (see how), creating an ongoing, up-to-date Knowledge Center. This was possible using the highly flexible and adaptable Velocity template-based CBM documentation generation capability.


The Company Knowledge Center provides the following information:

  • Detailed process documentation
  • Responsibilities of each role and clear view of processes
  • Active hyperlinks to other related company documents
  • Knowledge evaluation


Knowledge Evaluation

Once processes were identified, the Knowledge Center became the single place to access them. But it is clear that the ability to access knowledge is not enough to ensure processes are known, understood, and correctly followed by employees. Knowledge evaluation based on Moodle, the open-source course management system, was introduced in the Knowledge Center to evaluate employees’ knowledge of business processes. Tests became mandatory, and they are now used for employee qualification evaluations.

Solution Influence Today


Today, with the help of the Knowledge Center, all business processes are improved and followed by employees. The project was part of systematic changes that took place in the company, and the knowledge is used for related projects as well: the Oracle E-Business Suite business management system and IBM Cognos BI planning and reporting systems are integrated into the company infrastructure.

Conclusions

Challenges

  • Recent restructuring of the company
  • Different levels of knowledge among employees
  • Employees working in a complex, interconnected work-process environment lacked knowledge about related processes, goals, and priorities
  • It was not clear how to keep employees’ knowledge at the required level
  • There were no process self-control mechanisms in place

Solution

  • Process modeling using the highly usable and standard-compliant BPMN 2 solution from No Magic: Cameo Business Modeler (CBM)
  • A knowledge and evaluation center based on Moodle, the open-source course management system, was made available to all employees
  • The Knowledge Center is always up-to-date with the newest process, role, and task information thanks to documentation generated automatically with the flexible Velocity template-based CBM documentation generation capability

Benefits

  • Agreements and knowledge are preserved
  • Processes and role responsibilities are clear, and processes have owners
  • There is now agreement regarding processes between managers
  • There is a place to test knowledge
  • It is clear which processes need to be improved to reach the required KPIs

Results

  • More than 200 processes with more than 1000 tasks were documented in one year; the project is 50% complete
  • Business processes are reviewed, improved, and followed by employees
  • The Knowledge and Certification Centers are always up-to-date
  • The documented processes are used for related projects: integration of the Oracle E-Business Suite business management system and IBM Cognos BI planning and reporting systems into the company infrastructure

About KG Group

AB “Kauno Grūdai,” together with AB “Vilniaus paukštynas” and AB “Kaišiadorių paukštynas,” currently forms one of the most modern and economically strong business mergers in Lithuania (Europe): the “KG Group” group of companies. The company is engaged in the processing of agricultural products and has divided its activities into seven types of business: flour, crop production, combined fodder and premixes, protein supplements, raw materials, pet food, and veterinary formulation products. KG Group has over 3000 employees.

Source (http://www.kauno-grudai.lt)

Dec 17, 2013
 

This article gives details of common undesirable situations users have encountered while working with a Teamwork Server repository containing multiple projects. All of these situations can easily lead to more serious problems, such as data loss, duplicated and inconsistent data, and time lost cleaning up errors.

We suggest an easy way to identify and remedy issues in the early stages using the new MagicDraw capability – Project Usage Map.

Introduction

No Magic’s Teamwork Server is software that allows more than one user to work with the same model. The model is stored centrally in the Teamwork Server repository and every modeler working with either MagicDraw, Cameo Business Modeler, or Cameo Enterprise Architecture may collaborate on the same project.

The Project Usage Map is a live visual graph that represents Teamwork Server project usages as well as identifies potential problem areas.

The Project Usage Map allows for representing projects and their dependencies in two views:

  • All Projects view that shows all projects and all the dependencies among them.
  • Individual project view that shows a particular project along with other directly and indirectly used modules.

Using the Project Usage Map you can easily do the following:

  • Identify, analyze, and validate dependencies among projects (for example, you can easily find all the projects in which a particular module is used).
  • Identify cyclic dependencies among projects.
  • Identify and fix inconsistent dependencies among projects.

Figure 1. Project Usage Map – complete repository view

Clear and Valid Teamwork Repository – Complexity Addressed

Problem

If you are developing a large model that has several dependent parts, it is advisable to split it into several modules (a module is a project with shared packages that other projects can use). Partitioning enables reusability of components in different projects and may improve performance on very large projects when modules are loaded selectively. Also, different users can work on their own projects, which are part of another, bigger project. Users can be assigned different access rights in each module, and versions of each module are tracked separately in Teamwork Server. When the number of users grows, the number of projects and the usages between them grow too. A large number of usages makes it difficult to understand and validate them from a single project perspective.

It becomes important to manage the Teamwork Server repository efficiently. You would like to identify, analyze and validate dependencies between projects. If you have multiple projects in the repository, this task becomes difficult or even impossible without dedicated tools.

Solution

MagicDraw provides the Project Usage Map to identify, analyze and validate dependencies between projects.

Note. You only need to be connected to your Teamwork Server repository from MagicDraw in order to invoke Project Usage Map.

The Project Usage Map shows project usages (i.e. directly and indirectly used projects which are visible from the selected project) in a graph. Graph nodes are individual projects and graph edges are usages. The Usage Map has two views:

  • All Projects view that shows all projects and all the dependencies among them.
  • Individual project view that shows a particular project along with other directly and indirectly used modules.

After reviewing the entire repository and identifying problems, you can dive into an individual project view to examine its usages in detail.


Figure 2. Handling complexity. Project Usage Map from plain projects list to single project view

Both views have the capability to filter information by projects and categories. Your filter settings will be saved so you can continue your analysis next time you launch Project Usage Map.

All Cyclic Usages Are Valid

Problem

When project decomposition is used, the project is split into smaller projects. You benefit from a decomposed project because you need to load only a small part of the project instead of all of it. This reduces complexity and even improves performance.

In addition, you would not expect, and most likely not want, to load the remaining parts of the main project each time you open a single part of it. This can happen if you have cyclic usages, i.e. project parts using the main project. Typically, such usages increase complexity and reduce performance (see Figure 3 and Figure 4).

Also, if the project takes part in a cycle, it can’t be reused as a totally independent part in another project. Other projects taking part in the cycle will be automatically used as well.

Cycles are often a symptom of unintentional usages, created unknowingly by the user.

They can often cause further problems – such as inconsistent mounting points or module version usage inconsistencies.


Figure 3. Cyclic usages


Figure 4. Cycles through unshared usages

A sharing usage exposes project parts to other projects. In the example above (depicted in Figure 4), usages are represented as dashed lines and sharing usages as solid lines. The shared packages of Project C are visible to Project A because Project B is using and sharing Project C. But Project B’s shared packages are not visible to Project D because Project A is only using (not sharing) Project B.
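This visibility rule can be expressed as a small graph traversal: a directly used project is always visible, and beyond that, visibility propagates only along sharing usages. The following Python sketch uses hypothetical data structures (not MagicDraw code) to reproduce the Figure 4 situation:

```python
def visible_projects(project, uses, shares):
    """Return the set of projects visible from `project`.

    `uses` maps a project to the projects it uses directly;
    `shares` maps a project to the subset of its usages it also shares.
    A directly used project is always visible; beyond that, visibility
    propagates only through sharing usages.
    """
    visible = set()
    stack = list(uses.get(project, []))
    while stack:
        p = stack.pop()
        if p in visible:
            continue
        visible.add(p)
        # Only projects that `p` *shares* become visible further up the chain.
        stack.extend(shares.get(p, []))
    return visible

# The Figure 4 situation: D uses A; A uses B (without sharing);
# B uses and shares C.
uses = {"D": ["A"], "A": ["B"], "B": ["C"]}
shares = {"B": ["C"]}
print(visible_projects("A", uses, shares))  # {'B', 'C'}
print(visible_projects("D", uses, shares))  # {'A'}
```

As in the figure, C is visible to A through B's sharing usage, while B stays invisible to D because A does not share it.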

Project A, depicted in Figure 4, is the project from whose perspective the cycle exists, i.e. the cycle is formed from both sharing and ordinary usage relations.

Solution

The Project Usage Map automatically identifies cycles and highlights them in the repository view. You can then open a Usage Map for the projects suspected of participating in a cycle to analyze it more closely and, if necessary, break the usages that cause the cycle.
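Conceptually, finding such cycles is a depth-first search over the usage graph. The sketch below is illustrative, not the tool's implementation; it detects the cycle created when a part of a decomposed project uses the main project again:

```python
def find_cycle(usages, start):
    """Depth-first search for a usage cycle reachable from `start`.

    `usages` maps each project to the projects it uses.  Returns the
    first cycle found as a list of projects, or None.
    """
    def dfs(node, path):
        if node in path:
            return path[path.index(node):] + [node]
        for used in usages.get(node, []):
            cycle = dfs(used, path + [node])
            if cycle:
                return cycle
        return None
    return dfs(start, [])

# Parts of a decomposed project accidentally using the main project again:
usages = {"Main": ["Part1", "Part2"], "Part2": ["Main"]}
print(find_cycle(usages, "Main"))  # ['Main', 'Part2', 'Main']
```

Breaking any usage on the reported path (here, Part2's usage of Main) removes the cycle.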


Figure 5. Highlighted cycle

Consistent Version, Branch Usages, and Mount Points

Problem

It is not unusual for one project to be used by multiple other projects if it is a library or profile for example. It is also likely that a single project will use some other projects that already use that same library or profile.

You will therefore get multiple usages of the same project. This is normally not a problem. However, several problematic cases can occur:

  1. You are using different versions of the same project in your main project (see Figure 6).


Figure 6. Inconsistent project version usage

  2. You are using versions from both the trunk and a branch of the same project in your main project (see Figure 7).


Figure 7. Inconsistent project branch usage

  3. You are mounting (mounting means using another project in a particular package of the main project) the used project in different packages in your main project (see Figure 8).


Figure 8. Inconsistent mount points

All of the above cases are inconsistencies which may be difficult to understand without specialized tooling. All of them can cause user confusion, loss of time, and in some cases even lost data (careless editing in the case of read/write modules).

Solution

The Project Usage Map highlights these inconsistencies. You can then open projects with inconsistent usages and fix them by unifying the used project version, branch, or mounted package information.


Figure 9. Inconsistent mount usage

All Modules Are Required and Used

Problem

When the number of projects in the repository grows, it is common for some of the projects to become outdated or no longer used. You would prefer to remove them, but you are not sure whether they are still used by some other, still-active project.

Solution

The Project Usage Map highlights unused modules. Based on this information, you can move all of them into the deprecated category or remove them entirely from the repository.
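Conceptually, an unused module is simply one with no incoming usage edge in the repository's usage graph. A minimal sketch with hypothetical data (not the tool's implementation):

```python
def unused_modules(modules, usages):
    """Return the modules that no project uses.

    `modules` lists the reusable modules in the repository;
    `usages` maps each project to the modules/projects it uses directly.
    """
    used = {m for targets in usages.values() for m in targets}
    return sorted(m for m in modules if m not in used)

modules = ["Profile", "OldLibrary", "UnitsLibrary"]
usages = {"ProjectA": ["Profile"], "ProjectB": ["Profile", "UnitsLibrary"]}
print(unused_modules(modules, usages))  # ['OldLibrary']
```

Modules reported this way are safe candidates for the deprecated category or for removal.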


Figure 10. Unused modules

Conclusion

  1. We conducted an overview of potential repository problems which usually occur when the number of projects, and of users working with them, grows large. Potential problems:
    • Unclear usages between projects
    • Cyclic usages
    • Inconsistent version and branch usages, and mount points
    • Unused modules
    • Unconfirmed usages
  2. We introduced solutions to resolve these problems.
  3. We gave a basic overview of the means to address these problems: the Project Usage Map.
  4. We showed that the Project Usage Map provides the ability to identify, analyze, and validate relations between projects quickly.

Let’s imagine you have analyzed the Project Usage Map results. They are graphical and visual; however, you may want to save them not only as a picture but also in model form, so you can create your own validation rules to check the usages and their properties. You may also want to compare the usages you have now with the ones you will have in the next two months, or to compare an actual and a typical map to find inconsistencies. To accomplish this, use the action to export the Project Usage Map directly to the model, and everything is saved.


Figure 11. Export to model

References

[1] Project Usage Map online demo is available at http://www.nomagic.com/support/demos.html

[2] Project Usage Map in MagicDraw User Manual is available at http://www.nomagic.com/support/documentation.html

Note: The Project Usage Map is a v17.0.3 capability.

Nov 13, 2013
 

It is a common need to have a review process for models. Requirements for such a process could be:

    1. Issues that are found by a reviewer should be directly related to the model element or diagram.
    2. It should not be possible for a reviewer to change the model elements / diagrams.

Solution

MagicDraw has the ability to create links to model elements or diagrams. This is a good way to uniquely identify the reviewable part of the model. Following such a link not only selects the required elements in the project, but can also start MagicDraw and open the required project (if using Teamwork Server).

The question is where to store review information to make it easily accessible and manageable. The simplest but most solid and long-lasting solution is to have a change management system, e.g. Bugzilla (free), Atlassian JIRA, or a similar system, and store review comments, with links to the model, as review tasks.

More information about the ability to add and follow links to the model can be found in the MagicDraw User Manual, section “Copying/Opening Element URLs”.

It is important that the reviewer cannot accidentally edit the model. There are a couple of solutions for this:

  • Disable edit permissions for this user for particular projects, if Teamwork Server is used.
  • Provide the reviewer with the no-cost MagicDraw Reader edition; this edition can only read the model, not modify it.

Some other review methods are possible. Feel free to share your experience and needs.