Pete Ianace


Jun 06, 2014
 

First, let me start with a simple analogy.

Let’s first imagine a real orchestra. What would it sound like without a conductor? Likely just a lot of annoying noise. Add a conductor, and what was annoying can quickly become very enjoyable. Now we get a bit more techie, but remember the role the conductor plays as we go along.

At a very high level, a crucial aspect of SOA is service orchestration. Enterprise systems and integration projects designed according to SOA principles depend on successful service orchestration, so finding a platform with strong service orchestration capabilities is a high priority for enterprises looking to build their systems this way. Before going on, let’s make sure we are on the same page when it comes to SOA, or Service Oriented Architecture.

SOA is an approach to developing enterprise systems by loosely coupling interoperable services – small units of software that perform discrete tasks when called upon – from separate systems across different business domains. SOA emerged in the early 2000s, offering IT departments a way to develop new business services by reusing components from existing programs within the enterprise rather than writing functionally redundant code from scratch and developing new infrastructures to support them. With SOA, functionalities are expressed as a collection of services rather than a single application, marking a fundamental shift in how developers approach enterprise architecture design.

To get a better understanding of service orchestration, let’s look at a bank loan example. A loan broker wants to make a loan request on behalf of a customer and uses an automated Loan Request Service. The broker accesses the Loan Request Service in the enterprise system to make the initial request, which is sent to an orchestrator (our conductor) that then invokes other services in the enterprise, in partner systems and/or in the cloud to process that request. The individual sub-services involved include a service to obtain credit scores from a credit agency, a service to retrieve a list of lenders, a service to request quotes from a bank service, and a service to process the quotes using the data from the other services. Together, the orchestrated services comprise the Loan Request Service, which returns a list of quotes from potential lenders to the broker who made the original request.
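The loan-request orchestration above can be sketched in a few lines of code. This is a toy illustration, not a real implementation: the service names, signatures and return values (`get_credit_score`, `get_lenders`, `request_quote`) are hypothetical stand-ins for the enterprise, partner and cloud services a real orchestrator would invoke.

```python
def get_credit_score(customer_id):
    """Stand-in for a credit-agency service."""
    return 720

def get_lenders(credit_score):
    """Stand-in for a lender-directory service."""
    return ["Bank A", "Bank B"] if credit_score >= 650 else ["Bank C"]

def request_quote(lender, amount, credit_score):
    """Stand-in for a bank quoting service."""
    base_rate = 6.0 if credit_score >= 700 else 8.5
    return {"lender": lender, "amount": amount, "rate": base_rate}

def loan_request_service(customer_id, amount):
    """The orchestrator (our 'conductor'): it calls the sub-services
    in order and combines their results into one response."""
    score = get_credit_score(customer_id)
    lenders = get_lenders(score)
    return [request_quote(lender, amount, score) for lender in lenders]

for quote in loan_request_service("cust-42", 250_000):
    print(quote["lender"], quote["rate"])
```

The point is that `loan_request_service` contains no business logic of its own; it only sequences calls to existing services, which is exactly what an orchestration engine does declaratively.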

As the above example illustrates, service orchestration is a fundamental aspect of successfully implementing SOA. In a truly service oriented architecture, new applications are created by new orchestrations of existing services – not by writing new code.

If it were only that simple

On the surface, service orchestration and SOA are relatively simple concepts. For enterprises faced with integration challenges, skyrocketing IT budgets and increasingly complex infrastructures, building new applications from granular, reusable software components is an understandably attractive way to create more agile and competitive systems and reduce time to market.

Service orchestration and SOA, however, can be difficult to achieve without the right set of tools. In SOA’s early days, CTOs of large companies eagerly adopted it and implemented it with a rip-and-replace model. That approach carried high financial costs as well as major time investments, since it often required developers to orchestrate services programmatically (i.e., write new code), defeating the ultimate purpose of adopting SOA.

What was needed was a simpler and more flexible way to perform service orchestrations and implement SOA, and the enterprise service bus (ESB) emerged as the go-to mechanism for both. There are a number of ESB platforms on the market today, but if you buy into the idea that embracing SOA eliminates writing code and allows extensive reuse of components from existing programs, why stop there? I would suggest that an ESB platform that could do all of the above, plus put your business stakeholders, IT stakeholders, and IT operations stakeholders on the same page and eliminate false starts and finger pointing, would be a panacea (a Super Conductor). If you have an open mind and would like additional details, just ask by leaving a comment.

Image source: Brussels Philharmonic, https://www.flickr.com/photos/samsungtomorrow/8165527944/

Apr 24, 2014
 


Businesses are struggling when it comes to IT innovation.

According to a Gartner report, at least $2.68 trillion in total spending, some 80% to 85% of IT budgets, goes toward keeping the lights on. The cost of maintenance has never had more zeroes. That is a lot of money that could have been used to deploy new customer-facing solutions.

Money wasted on maintenance is a sad enough story, but there is a second source of inefficiency that is even more troubling: the IT backlog. It is the accumulation of new-service and change requests from business units that IT struggles to deliver, due to limitations ranging from a lack of budget to a lack of resources and skills.

That is one of the primary reasons IT and business have a problem getting on the same page.

The way companies are currently dealing with their backlogs and technical debt is not working. It is an unsustainable business model that will render IT departments ineffective and irrelevant, relegated to being a cost center and unable to help companies innovate.

But there is one alternative that will shrink your yearly total cost of ownership and free valuable resources to innovate.

Don’t bother reading on if you have a closed mind or feel you already have all the answers.

Consider a unique middleware platform deployed in hundreds of global accounts: it reduces total cost of ownership by as much as 50% annually and allows new business services to be deployed in a fraction of the time of traditional approaches.

This platform, called the Cameo E2E Bridge, puts you in the driver’s seat with a very different, 100% model-driven approach. It gives all stakeholders a visual platform for defining requirements, eliminating traditional coding and thereby all but eliminating misunderstanding between business and IT. The Bridge lets clients implement new business requirements, and all associated technical improvements, in their existing IT landscape transparently, rapidly and cost-effectively. It is one platform that provides a 360-degree, transparent business solution: clients can leverage their legacy systems, streamline IT operations, improve business workflow and rapidly deploy new customer-facing applications.


Model driven solution eliminates misunderstanding between business and IT

 Change made simple with the Cameo E2E Bridge

  • Learn from real case studies – we have numerous case studies with very specific ROI results
  • Keep what’s working – change everything else
  • Choose an effective feasibility study (POC)
  • Gain confidence from fast project successes
  • Modernize through controllable, incremental steps

If you want to learn more, just email me at pete@nomagic.com or leave a comment here.

Apr 11, 2014
 

Ontology

This post contains my views on the subject, along with some source material found on the web (Wikipedia). Comments on the subject are very welcome.

An ontology formally represents knowledge as a hierarchy of concepts within a domain, using a shared vocabulary to denote the types, properties and interrelationships of those concepts.

Ontologies are the structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture as a form of knowledge representation about the world or some part of it. The creation of domain ontologies is also fundamental to the definition and use of an enterprise architecture framework.
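The “hierarchy of concepts” idea can be illustrated with a toy example. The domain, concept names and properties below are entirely made up; real ontologies are expressed in dedicated languages such as RDFS and OWL rather than application code.

```python
# A toy ontology: each concept names its parent ("is_a") and the
# properties it introduces. Concepts inherit properties from ancestors.
ontology = {
    "Vehicle":   {"is_a": None,      "properties": ["wheels"]},
    "Car":       {"is_a": "Vehicle", "properties": ["doors"]},
    "SportsCar": {"is_a": "Car",     "properties": ["top_speed"]},
}

def all_properties(concept):
    """Walk up the is-a hierarchy, collecting inherited properties."""
    props = []
    while concept is not None:
        props = ontology[concept]["properties"] + props
        concept = ontology[concept]["is_a"]
    return props

print(all_properties("SportsCar"))  # → ['wheels', 'doors', 'top_speed']
```

Even this tiny sketch shows the two ingredients the definition names: a shared vocabulary (the concept and property names) and typed interrelationships (the is-a links).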

As it relates to the Big Data trend:

Ontology claims to be to applications what Google was to the web. Instead of integrating the many different enterprise applications within an organization to obtain, for example, a 360-degree view of customers, Ontology enables users to search a schematic model of all the data within those applications. It extracts relevant data from source applications, such as a CRM system, big data applications, files, warranty documents and so on. The extracted semantics are linked into a search graph, rather than a schema, to give users the results they need.

Ontology gives users a different approach to using enterprise applications, removing the need to integrate them. It allows users to search and link applications, databases, files, spreadsheets and more, wherever they reside. The product is interesting because over the past years a vast number of enterprise applications, for various needs and with various requirements, have been developed and adopted by organizations, and integrating them to obtain a company-wide view is difficult, expensive and often risky.
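As a rough sketch of this search-graph approach, the snippet below extracts facts from two hypothetical systems as (subject, predicate, object) triples and answers a cross-system question by pattern-matching the combined graph, with no integration between the source systems. All system names, identifiers and data are illustrative.

```python
triples = [
    # facts extracted from a hypothetical CRM system
    ("customer:42", "name", "Acme Corp"),
    ("customer:42", "owns", "product:7"),
    # facts extracted from a hypothetical warranty system
    ("product:7", "warranty_expires", "2015-06-01"),
]

def find(subject=None, predicate=None, obj=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# A "360-degree" question spanning both systems, answered from the
# graph alone: which products does customer 42 own, and when do their
# warranties expire?
for _, _, product in find("customer:42", "owns"):
    print(product, find(product, "warranty_expires"))
```

The two source systems never talk to each other; only their extracted facts are linked, which is the essence of the claim that graph-based search can replace point-to-point integration.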

Why is it important?

It eliminates the need to integrate systems and applications when looking for critical data or trends.

How is it applied and what are the important elements that make it all work?

Ontology uses a unique combination of an inherently agile, graph-based semantic model and semantic search to reduce the timescale and cost of complex data integration challenges. Ontology is rethinking data acquisition, data correlation and data migration projects in a post-Google world.

Enables the Semantic Web

The Semantic Web

The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries.

While its critics have questioned its feasibility, many others argue that applications in industry, biology and human sciences research have already proven the validity of the original concept.

The main purpose of the Semantic Web is to drive the evolution of the current Web by enabling users to find, share, and combine information more easily. Humans are capable of using the Web to carry out tasks such as finding the Estonian translation for “twelve months”, reserving a library book, and searching for the lowest price for a DVD. However, machines cannot accomplish all of these tasks without human direction, because web pages are designed to be read by people, not machines. The Semantic Web is a vision of information that can be readily interpreted by machines, so machines can perform more of the tedious work involved in finding, combining, and acting upon information on the web.

The Semantic Web, as originally envisioned, is a system that enables machines to “understand” and respond to complex human requests based on their meaning. Such an “understanding” requires that the relevant information sources be semantically structured.

The Semantic Web is regarded as an integrator across different content, information applications and systems. It has applications in publishing, blogging, and many other areas.

Often the terms “semantics“, “metadata“, “ontologies” and “Semantic Web” are used inconsistently. In particular, these terms are used as everyday terminology by researchers and practitioners, spanning a vast landscape of different fields, technologies, concepts and application areas. Furthermore, there is confusion with regard to the current status of the enabling technologies envisioned to realize the Semantic Web.

Semantic Web solutions

The Semantic Web takes the solution further. It involves publishing in languages specifically designed for data: Resource Description Framework (RDF), Web Ontology Language (OWL), and Extensible Markup Language (XML). HTML describes documents and the links between them. RDF, OWL, and XML, by contrast, can describe arbitrary things such as people, meetings, or airplane parts.

These technologies are combined in order to provide descriptions that supplement or replace the content of Web documents. Thus, content may manifest itself as descriptive data stored in Web-accessible databases, or as markup within documents (particularly, in Extensible HTML (XHTML) interspersed with XML, or, more often, purely in XML, with layout or rendering cues stored separately). The machine-readable descriptions enable content managers to add meaning to the content, i.e., to describe the structure of the knowledge we have about that content. In this way, a machine can process knowledge itself, instead of text, using processes similar to human deductive reasoning and inference, thereby obtaining more meaningful results and helping computers to perform automated information gathering and research.
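As a small illustration of such machine-readable descriptions, the snippet below parses a tiny RDF/XML fragment with Python’s standard library and reads off the properties of the thing it describes, rather than scraping page text. The vocabulary URI, resource URI and values are invented for the example.

```python
import xml.etree.ElementTree as ET

# A minimal RDF/XML description of a person (not a document about her).
rdf_xml = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/terms/">
  <rdf:Description rdf:about="http://example.org/people/alice">
    <ex:name>Alice</ex:name>
    <ex:worksFor>Example Corp</ex:worksFor>
  </rdf:Description>
</rdf:RDF>"""

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
EX = "{http://example.org/terms/}"

root = ET.fromstring(rdf_xml)
for desc in root.findall(f"{RDF}Description"):
    subject = desc.get(f"{RDF}about")      # the thing being described
    for prop in desc:                      # its properties and values
        print(subject, prop.tag, prop.text)
```

A program consuming this fragment knows it is looking at a resource with a name and an employer, which is precisely the “knowledge itself, instead of text” distinction the paragraph above draws.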

Components

The term “Semantic Web” is often used more specifically to refer to the formats and technologies that enable it. The collection, structuring and recovery of linked data are enabled by technologies that provide a formal description of concepts, terms, and relationships within a given knowledge domain.

The Semantic Web Stack illustrates the architecture of the Semantic Web. The functions and relationships of the components can be summarized as follows:

  • XML provides an elemental syntax for content structure within documents, yet associates no semantics with the meaning of the content contained within. XML is not at present a necessary component of Semantic Web technologies in most cases, as alternative syntaxes exist, such as Turtle. Turtle is a de facto standard, but has not been through a formal standardization process.
  • XML Schema is a language for providing and restricting the structure and content of elements contained within XML documents.
  • RDF is a simple language for expressing data models, which refer to objects (“web resources“) and their relationships. An RDF-based model can be represented in a variety of syntaxes, e.g., RDF/XML, N3, Turtle, and RDFa. RDF is a fundamental standard of the Semantic Web.
  • RDF Schema extends RDF and is a vocabulary for describing properties and classes of RDF-based resources, with semantics for generalized-hierarchies of such properties and classes.
  • OWL adds more vocabulary for describing properties and classes: among others, relations between classes (e.g. disjointness), cardinality (e.g. “exactly one”), equality, richer typing of properties, characteristics of properties (e.g. symmetry), and enumerated classes.
  • SPARQL is a protocol and query language for semantic web data sources.
  • RIF is the W3C Rule Interchange Format, an XML language for expressing Web rules that computers can execute. RIF provides multiple versions, called dialects, including the RIF Basic Logic Dialect (RIF-BLD) and the RIF Production Rules Dialect (RIF-PRD).
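To make the stack a bit more concrete, here is a toy, pure-Python imitation of how a SPARQL query evaluates: each triple pattern may contain variables, and patterns are joined on shared variable bindings. This is a sketch of the idea only; the data and vocabulary are made up, and real SPARQL engines are far more capable.

```python
triples = [
    ("alice", "knows", "bob"),
    ("bob",   "knows", "carol"),
    ("alice", "age",   "30"),
]

def match(pattern, triple, bindings):
    """Try to unify one pattern with one triple under current bindings.
    Strings starting with '?' are variables; return the extended
    bindings on success, or None on mismatch."""
    b = dict(bindings)
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            if p in b and b[p] != t:
                return None
            b[p] = t
        elif p != t:
            return None
    return b

def query(patterns):
    """Join triple patterns SPARQL-style, accumulating bindings."""
    results = [{}]
    for pat in patterns:
        results = [b2 for b in results for t in triples
                   if (b2 := match(pat, t, b)) is not None]
    return results

# Whom do the people Alice knows, themselves know?
print(query([("alice", "knows", "?x"), ("?x", "knows", "?y")]))
# → [{'?x': 'bob', '?y': 'carol'}]
```

The join on `?x` across the two patterns is the essential mechanism: it is how SPARQL traverses an RDF graph rather than scanning rows in a table.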

Current state of standardization

Well-established standards: