
The Leonardo Blog

Customised Model Release Cycle Management in ARIS

Since the first release of ARIS Connect, the concepts of publishing in ARIS have changed, and lightweight workflows were introduced to provide basic support for the governance of process models. These new features have brought new and interesting experiences. However, feedback from ARIS customers with rigorous requirements for process model governance indicates that more is needed than the lightweight workflows offer: for example, a workflow for publishing models into isolated repositories to ensure consistency for reporting and analysis. Although the gap left by the lightweight workflows can be filled by a fully-fledged ARIS Process Governance (APG) engine, budgets are often a barrier. On the other hand, manual administration (e.g. handling models across several repositories without the support of automation) requires a fair bit of maintenance.

Solution Overview

To address these challenges, a customised approach is proposed; this article introduces a semi-automated solution built on the programming and reporting capabilities of ARIS.

Figure 1 – Elements of Customised Model Release Cycle Management

The solution includes a few elements, as described in Figure 1. Four different roles are responsible for advancing a semi-automated workflow by invoking designated Release Cycle Management (RCM) reports. These roles are designed for a typical scenario, and can be changed and updated according to a customer’s specific requirements. Two repositories are used: one for housing, developing, reviewing and approving process models; and another for the released process models. The latter, the production repository, is available to operational users to consume models once they are released and transferred from the development repository. It is also available to process approvers to review feedback for revision should changes emerge in the process context.
A set of reports carries out the actual RCM tasks and informs the relevant roles with support from an internal email system. A record is logged in raw format every time a model’s status changes, for potential later reporting. This logged information includes the date/time, the RCM task performed, and process model information.

Transitions of Model Status

The model status, identified by a model attribute, is essential for determining where a process model sits in the release cycle. It is also the basis for the RCM tasks to carry out appropriate activities, inform the relevant roles, and determine the next status.

Figure 2 – Transitions of model status

Model status is set to “In Progress” as soon as a new model is created. Once the model is ready for review by an appropriate reviewer, the modeller runs the RCM task “RCM – Request Model Review”. The task changes the model status to “To be reviewed” and sends an email to the reviewer. The model is then reviewed and, if it needs more attention, the reviewer rejects it by running the RCM task “RCM – Reject Model”. The model status is then changed back to “In Progress” and an email is sent to the modeller with relevant comments. Alternatively, if the review was successful, the reviewer runs the RCM task “RCM – Request Model Approval” and the model status changes to “Reviewed”.

Figure 3 – Implicit merge of released models

The process owner is now prompted by an email sent from the RCM task to perform the final review. The model can be rejected by the owner, or released to the production repository; the process owner can therefore run either RCM task: “RCM – Reject Model” or “RCM – Release Model”. In the latter case, the RCM task changes the model status to “Released”, locks the model in the development repository, and implicitly merges the model into the production repository. The production repository has the same structure as the development repository and houses only released models.
The status “To be revised” is set when the process owner has received feedback from operational users, or other parties, about defects in the process. As operational users are viewers in ARIS Connect, they can inform the process owner by commenting on the model within ARIS Connect. The process owner then evaluates the feedback and, if the process needs to be revised, runs the RCM task “RCM – Request Model Revision”. From this point, further work is carried out to formalise a solution that addresses the reported feedback. Once the model concerned has been updated with the agreed changes, the process owner runs the RCM task “RCM – Request Model Update”. The model is then unlocked, and the model status changes to “In Progress”. Relevant parties receive emails about the changes, which starts a new RCM cycle.

Solution Summary

This RCM solution offers a practical and flexible approach at a reasonable cost; reasonable, that is, compared to the investment that comes with introducing APG. As the automation is implemented with ARIS reports, these reports can be called from both thick and thin clients (even by ARIS Connect viewer users), which means that all types of client can participate in the RCM activities if required. This programming flexibility also allows custom semantic checks to be incorporated where the built-in semantic checks fail to meet requirements, and these checks can be connected to particular RCM stages. As it is a customised solution, it can be adapted to future changes. The solution supports the participation of operational users, who can give feedback from the context in which the processes run. Conventional RCM solutions are encapsulated within a BPM office. Although ARIS Connect is used in this sample scenario, the solution can be adapted to support a combination of ARIS Design Server and ARIS Publisher with similar results.
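The status transitions in Figure 2, together with the raw status-change log, can be sketched as a simple transition table. The status names and RCM task names below come from the article; the transition table, the model object and the log-record shape are illustrative assumptions for this sketch, not the real ARIS report scripting API.

```javascript
// Allowed transitions: current status -> RCM task -> next status.
const TRANSITIONS = {
  "In Progress":    { "RCM – Request Model Review":   "To be reviewed" },
  "To be reviewed": { "RCM – Reject Model":           "In Progress",
                      "RCM – Request Model Approval": "Reviewed" },
  "Reviewed":       { "RCM – Reject Model":           "In Progress",
                      "RCM – Release Model":          "Released" },
  "Released":       { "RCM – Request Model Revision": "To be revised" },
  "To be revised":  { "RCM – Request Model Update":   "In Progress" },
};

const statusLog = []; // raw records kept for potential later reporting

function runRcmTask(model, task) {
  const next = (TRANSITIONS[model.status] || {})[task];
  if (!next) {
    throw new Error(`Task "${task}" is not valid for status "${model.status}"`);
  }
  model.status = next;
  // Log date/time, the RCM task performed, and process model information.
  statusLog.push({
    when: new Date().toISOString(),
    task: task,
    modelName: model.name,
    newStatus: next,
  });
  return next;
}
```

For example, running “RCM – Request Model Review” on a model whose status is “In Progress” moves it to “To be reviewed” and appends a log record; an invalid task for the current status throws, reflecting the fact that each RCM report is only meaningful at particular stages of the cycle.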

BPMN Myths Debunked with Stephen White

In this interview with Stephen White, who was closely involved in the development of the BPMN language, we discussed the myths that have been swirling around the internet about BPMN. More specifically, we talk about some crowd-sourced myths and weaknesses that people had been discussing on the BPMN Wikipedia page.

Sandeep Johal: The first myth is that there is ambiguity and confusion in sharing BPMN models.

Stephen White: To me this is more of a tool vendor issue, perhaps. A lot of it just had to do with the maturity or quality of the import/export capabilities of the tool. I've seen tools that are very good at it, and other tools that are not as good at it. When you share a BPMN model, you might have issues importing it into a different tool. You might have to do some work after that, but I think that is a vendor issue. The specification itself gives the XML schema or metamodel behind it so that tools can do that. There were a couple of minor issues, and I think the OMG has been addressing those. There's a committee that does interchange formats, and they've been working on that and showing demonstrations at conferences. I think any of the technical issues are being solved, and I think it's mainly up to the tool vendors to support that.

Sandeep: Next myth ... that BPMN does not allow for routine work.

Stephen: I don't really understand this at all. I think that's kind of what one of its main capabilities is ... being able to create simple straight-through processes, a series of activities. We've got ways of doing looping based on data or based on a list of items. For example, if you have an invoice and you have a bunch of items in it, you can do a loop that manages all the different items on the list. I think BPMN does this very well; it's kind of what it was designed for.

Sandeep: Next myth ... BPMN does not support knowledge work.

Stephen: It does support some knowledge working capability. Case management processes are unstructured.
When BPMN was created back in the early 2000s, those kinds of unstructured processes were out in the business world. We knew about them, but weren't able to get in all the things that we needed to support them. We created a subprocess called an ad hoc process, where you can do the activities in any order you want and create a condition to say when you're done. This was kind of a step in that direction, but it wasn't complete. In version 2, we didn't add enough to it at that point. In the meantime, the OMG also created another standard called Case Management Model and Notation, CMMN, to kind of fill in the gaps there. Unfortunately, it's a separate spec, so if you want to do unstructured processes, you go to CMMN. If you want to do the structured ones, plus maybe a little bit of unstructured, you go to BPMN. I think in the long run this could be addressed by BPMN 3.0, where we can consolidate that. We could use one tool, one model, to create all the variations of business processes.

Sandeep: Converting BPMN models to executable environments is impossible.

Stephen: I don't see that at all. I think that's really up to the business process management tool. There are tools out there that do this. You can go into a BPMN tool, import a business BPMN model, and then make it executable. You can do that now. There are lots of tools that do that. It's really handled by the BPMN tool. I worked at IBM; they do that. Other tools do that, so I think this myth should be busted.

Sandeep: There is no support for business rules.

Stephen: First off, BPMN isn't intended to do business rule or decision modeling. It's intended to do business process modeling, as we discussed. It does provide hooks into these other tools or other models. For example, there's a task in BPMN called a business rule task. It was specifically added so that a BPMN tool could use that task to interact with a decision engine or business rule engine, bring back the results, and get them back to the process.
BPMN has the technical capabilities to support it, but it wasn't intended to do that kind of modeling.

Sandeep: BPMN is way too complicated.

Stephen: This has kind of been a long-standing complaint. It was there even before we finished the first specification. I kind of get the complaint. When we started building BPMN, we intended it to be for business people. It had to be simple enough for business people to use and understand. We didn't want it to be very technical looking. It had to be graphical, etc. It had to be simple, but all the people in the room had years of experience with actual business processes. We had been consultants. We had trained people. We understood what the issues are for business people out in the world, and real business processes are complicated. If you look at one company, they need a special kind of loop. You go to another company, and they need a transaction or something. You go to another company, and they have some other requirement. When we add those things up, there are a lot of little details that have to be added to the business process. Real business processes are complex, so it has to be complicated, but it has to be simple at the same time. We knew this going in, and so we came up with an approach to deal with that. That approach was to build some simple building blocks. For example, we have three main objects: circles for events, rectangles for activities, and diamonds for gateways. Those are the basic elements, so there are three things. You can see them. They look very easy. You can create models with them, and you have that simplicity. To add the complexity, you can then add variations of those things. You can add different types of tasks, different types of gateways. There we add the different markers to tell people what this particular type of task is, what it is for, and what those behaviors mean. With the basic building blocks, you can build up the complexity within the simple structure.
That was the intent, and that was the approach we took. I guess it's up to individual people to decide whether we were successful in that approach.

Sandeep: Finally, the last question is: what about BPMN 3.0? Do you know anything about it?

Stephen: At this point, I don't think there's any major effort in the OMG to do 3.0. I would like to see it done eventually, and I think it will be. To be honest, it is a lot of work for tool vendors to add support for new standards. The vendors out there that created version 2 kind of want stability in the marketplace for a while. They want to be able to build up the customers they have, and they also want to gather more requirements over time. We'll know more about what is needed for a version 3. A couple of the things that we mentioned earlier: certainly we need beefed-up support for case management or unstructured-type processes. We could add lower-level processes for doing service-level modeling; those are the models inside of a task. How do you implement those kinds of things? There are things at both high and low levels that could be added to BPMN in version 3. Unfortunately, I don't know exactly when that'd happen. When it does, I certainly would like to be involved.

Shifting business operations to an ‘as-a-service’ delivery model

As SaaS solutions become commonplace in several industries, the market has felt the effects. IDC research shows that SaaS technologies are projected to constitute a quarter of all new enterprise software purchases by 2016, while PwC estimates that SaaS delivery will make up approximately 14.2 percent of all software spending. Overall, the entire SaaS market is projected to expand at a compound annual growth rate of 21.3 percent over the next two years.

As a consultant working within a number of large and medium-sized organisations over the past several years, I’m used to seeing common problems being solved over and over again, everywhere I go. Some are big and some small, some more necessary than others, and some with varying degrees of ‘fit for purpose’ tweaks. After all, every business is different and every implementation needs to fit the business context around what it supports.

Shift toward ‘as-a-service’

However, there is a shift taking place, and more and more businesses are starting to see the benefits of moving towards ‘as-a-service’ arrangements. The transformation isn’t necessarily a new one; web-based email and corporate social networks have become a staple of the modern organisation, as well as support and maintenance team products such as SharePoint, Dropbox and Skype. All of the business functions that have begun the transformation to Software-as-a-Service offerings have one thing in common, which confuses the bigger picture: they generally still fall into the IT or technology bucket of the organisation. They perform well serving their single purpose, but very often they support a technical role within the business; after all, they are still technical tools.

Business Function/Operations ‘as-a-service’

The bigger piece of the pie, which is yet to be realised by large business, is that the ‘as-a-service’ offering shouldn’t be limited to internal tech-centric applications.
The transformation should be steering towards the business function and operations ‘as a service’: that is to say, the foundation of how the business actually conducts itself and generates revenue. This still very often dominates business projects, and projects take time. Time costs money, and money is often the thing being made or saved when projects are kicked off. Occasionally, we still see the response from project teams and management of throwing people at the problem. But more people often just means more churn rather than faster turnaround, and we all know how that one ends: the project blows out and you’re likely no closer to achieving the goal you set out for in the first place.

I’m talking specifically about the technical delivery of projects, and even more specifically about the system integration component of technical project delivery: the thing which enables the business to achieve its outcomes, service its customers and generate revenue. When I talk about the difficulty of successfully implementing projects with a large integration requirement, one of the main issues that arises consistently (possibly due to a disconnect between responsibility and understanding) is that it is underestimated just how specialised systems integration is.

Considering the people required to deliver the role on the project, reflect on how complex the integration specialist’s role is. It requires someone with an array of technical know-how across varying applications, with enough knowledge to understand how different systems work and what they are capable of handling. Projects additionally require that same person to demonstrate an understanding of business functions, use cases and context in order to implement appropriate solutions to business problems.
In order to do that effectively, they also need to be a product specialist, with a deep understanding of the very tool, or tools, which will be used as the integration platform of choice. It’s a tough ask, and in the grand scheme of things it’s not surprising that these people are both expensive and difficult to find. A pain point for many project managers is: where do you start when you need to find someone with the right skills for the job, with experience in the appropriate industry, within the allowed budget, and available when you need them?

Many businesses can realise real benefits by de-risking and making the move towards ‘delivery as a service’. Here are the benefits we see from taking such a shift:

Reduced IT expenditure, cost overheads and barriers associated with everything that comes with integration projects.
Faster delivery, and a clear path for far more efficient implementation teams.
Improved project alignment and operational efficiency.

The first step in the ‘as-a-service’ movement has been taken by many businesses, and has reached the technical support functions. It’s now time to consider the bigger picture and broaden the possibilities of what can be achieved by paving the way for integration delivery to be provided as a service as well.

Is Efficient, Cost-Effective Integration Possible?

An analysis of the next major paradigm shift.

The goal of most businesses is to grow revenue whilst reducing costs and increasing profits; it sounds simple enough. The problem is, we often have to invest a substantial amount of capital in the business only to get a return on that investment over a number of years. It is this investment that is usually an inhibitor to a business adopting a best-of-breed approach, resulting in sub-par solutions full of technical debt which are then difficult to maintain. The business becomes bound to key staff, infrastructure that becomes obsolete, software that requires constant upgrades, and excessive ongoing ‘business-as-usual’ costs.

As modern businesses attempt to become more efficient through the use of technology, they sometimes hedge due to a risk/ROI ratio that includes a large ticket price and a moderate time to market. They can see the benefits, like opening up their ‘digital channel’, or making their core systems more cohesive and the business more efficient. However, they can’t justify the unguaranteed ROI over a multi-year return period. It’s a problem faced by Tier-n companies as they are constantly under pressure to reduce IT costs, and until now their options have been limited to some form of outsourcing. Anyone who has worked in the industry knows that this does not work, and quality is the victim. It’s a strategy mostly enforced by short-term thinkers who need to hit short-term KPIs and who create a problem for the next person to fix. It’s time for a paradigm shift in how we think about reducing cost while maintaining, and even improving, quality.

Let’s look at this through an analysis of four different models. We’ll assume that a company or business has decided it can achieve process improvement through systems integration, or that it just needs to reduce IT costs, including the yearly integration bill.
It might seem odd to bundle these two opposing motivations into the same motivation for change, but in essence they are the same: positive IT outcomes that help improve the business at a reasonable cost. What options do these organizations have?

Enterprise Integration Software - Mature, fully featured, established integration platforms which are the preferred option; however, the higher license costs make these an expensive option.

Opensource Software - Whilst this option is appealing due to the lower license fee (opensource rarely means ‘free’), the business will most likely spend the majority of any license cost savings on building the appropriate framework and establishing a platform.

A Custom Solution - Many tier 2 or 3 organizations feel they can get away with developing a custom solution to suit their needs. What often happens is that the final solution is barely functional and lacks the inherent platform functionality required to meet minimal or regulatory requirements.

Managed Integration Delivery as a Service (MiDaaS) - The paradigm shift: a complete subscription-based managed service. This option gives you the full feature stack of the enterprise software at an affordable price, with lower overheads and a quicker time to market.

Let’s take a look at where each of these options may work:

Enterprise

A traditional approach with large overheads, establishment costs, and large license and support fees, generally tied to larger transformation projects. This results in maintaining staff and capability over a longer period.

Opensource

The perception that opensource is freeware is a misconception. These platforms do have a licensing component, and there is also the issue of resourcing with skilled people, and of the software itself having the inherent frameworks of the enterprise software. This may result in cheaper licensing costs; however, it will most likely result in a more expensive development cost.
It also does not solve the issue of maintaining capability, staff and infrastructure overheads.

Custom Solutions

From the seeds of a custom solution, I give you technical debt and key-resource reliance and constraint. Custom solutions are fine if you are tweaking your spreadsheet, or writing a small piece of code to make an irritating daily task easier, but in the realm of systems integration there is always a techie who thinks they can build their own integration platform. This is OK if you have two systems doing point-to-point with one or two integrations, and you don’t want to expose those systems for consumption elsewhere. But custom solutions very quickly outgrow their use, as their builders soon discover that logging, message affinity, auditing, etc. are pretty important things. What seemed like a quick time to market soon becomes a functionally limited, highly customized piece of software that one person knows how to fix…

Managed Integration Delivery as a Service (MiDaaS)

Has the horsepower of the enterprise software
Lower total cost of ownership
Makes the organization technology agnostic
No technical debt is carried by the business
Key resources, capability maintenance and staff overheads are no longer a problem
Subscription-based pricing covers all infrastructure, design, build and test
Reduces the yearly cost of integration by a significant amount
Local account presence
Quality assurance and consistency of service
Upgrades, maintenance, monitoring and alerting are all included

Summary

In today’s world we are seeing a shift to the cloud and a form of managed services in which organizations’ infrastructure is managed by third parties, but the actual work done on the applications on that infrastructure is still managed in-house. What I’m talking about is managing the complete integration service, from design to BAU.
Sure, all of the other approaches have their place for the moment, because existing systems, teams, IT managers, etc. are all in place and can’t yet see how to make the move to a fully managed service. However, if we can make the move to the cloud, it shouldn’t take much to nudge us further along. We are already seeing companies move to this model, as they can see the immediate benefits and cost savings they can realize for their business. This shift will continue at an increased pace once other companies see the benefits being realized by their competitors. The paradigm shift is here.

11 Benefits of Customizing ARIS for Your Organization

ARIS (Architecture of Integrated Information Systems) is an approach to enterprise modelling. It offers methods for analysing processes and taking a holistic view of process design, management, workflow, and application processing. The ARIS concept is the foundation for the ARIS Toolset software system that supports the modelling (ARIS). The ARIS Design tool has become widely popular as a powerful business-process modelling tool. One reason for the suite’s popularity is its configurability to suit individual businesses’ requirements and environments. The evaluation module includes the ability to implement custom reporting and scripting.

The ARIS toolset (referred to hereafter as ARIS) is one of the leading tools for Business Process Management, produced and supported by the vendor Software AG. The Evaluation section of ARIS has many features; it is comprehensive and includes reporting, macros, transformations, semantic checks, and queries.

Business Process Modelling (BPM) is the activity of representing the processes of an enterprise so that the current ones may be analysed and improved. Enterprise modelling is the abstract representation, description and definition of the structure, processes, information, and resources of an identifiable business, government body, or other large organisation (Enterprise Modeling). BPM and enterprise modelling are typically performed by business analysts and managers seeking to improve process efficiency and quality. The need to model a business process to provide improvements through information technology is a common driver.

As a data repository, the re-use of data and the relationships between data make ARIS more than just a sophisticated graphics tool. There is no point having all the data collected in one place if it can’t be output into formats that the business can benefit from. Also, there is no use collecting data if it is not reliable and its integrity is not maintained.
Benefits of Customizing ARIS

Configuration / Conventions

A powerful part of ARIS is the ability to customise the conventions to suit the business’s requirements by changing the method, filters and templates. ARIS provides a set of standard reports, but these are configured for the typical ARIS configuration with SAG formatting and branding. The flexibility of ARIS is that it provides an inbuilt compiler within the Code View of the Reports module to write custom ARIS reports based on an ARIS JavaScript derivative. User-defined reporting uses the ARIS data specifically as required by the business.

Uniform reporting based on custom configuration/conventions
Corporate branding, custom layouts and formatting as per business communication protocols
Import/update ARIS data quickly and in bulk to reduce repetitive modelling
Increased data integrity by using custom scripts to evaluate and analyse the data
Quality assurance: easier quality checking, ensuring the agreed modelling process is adhered to by the modellers (e.g. ensure all VACD functions have assigned EPCs)
Inclusion of non-ARIS business experts via distribution of reports as files for review outside ARIS
Automatic and scheduled reporting: event-triggered scripts (e.g. triggered when a model is saved) or time-triggered scripts (e.g. triggered at midnight every night)
Consistent and uniform modelling via import scripts (e.g. a FAD creator with custom layout)
Business reporting: shows data in report format in one file, rather than multiple models, for those who prefer less visual representations
Statistics: shows data in spreadsheets and tables
Graphical reporting: shows the models/objects as graphics
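As an illustration of the quality-assurance style of check listed above (e.g. ensuring all VACD functions have assigned EPCs), the following sketch shows the core logic in plain JavaScript. A real ARIS report would be written against the ARIS report scripting API; here the functions are represented as plain objects with hypothetical names, so this is a sketch of the check rather than a runnable ARIS report.

```javascript
// Sketch of a custom quality check: list the VACD functions that have no
// assigned EPC. In a real ARIS report, the functions would be fetched via
// the report scripting API; these plain objects stand in for them.
function findFunctionsWithoutEpc(vacdFunctions) {
  return vacdFunctions
    .filter(fn => !fn.assignedEpcs || fn.assignedEpcs.length === 0)
    .map(fn => fn.name);
}

// Example input: one compliant function, one missing its EPC assignment.
const vacdFunctions = [
  { name: "Process Order",  assignedEpcs: ["EPC: Process Order"] },
  { name: "Invoice Client", assignedEpcs: [] },
];

console.log(findFunctionsWithoutEpc(vacdFunctions)); // ["Invoice Client"]
```

The same shape of script can back the other checks in the list: the evaluation step stays the same, and only the data fetched and the rule applied change.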
