The Value of Inter-Domain Infrastructure Technology for SOA

Optimizing the value of SOA

In my recent predictions for 2009, I pointed out:  

"There will be a larger focus on inter-domain SOA technology, or highly scalable and secure middleware technology that will provide scalable service and information access between the instances of SOAs within the enterprise, and perhaps intercompany as well. The fact is that much of the SOA solutions out there can't scale much past a single problem domain, thus this technology will become key to the strategic success of SOA."

As SOA moves from the project level to the enterprise, SOA architects and practitioners quickly realize the need to consider common services and data management issues. Today we seek the right approaches and the proper enabling technology and standards to provide our enterprises with a common scalable and secure mechanism that ensures all instances of SOAs within the enterprise have the technology-independent infrastructure they need in support of the business. In essence, the architects need the freedom to select whatever best-of-breed technology is right for their project requirements and still be able to depend upon common service and information management infrastructure that is technology and vendor-agnostic. 

Enterprise Service Buses, or ESBs, held the initial promise of providing core information integration and services sharing for the enterprise. This did, indeed, work at the micro-domain, or project, level. However, ESBs quickly proved unfit for the purpose of addressing the larger issues around core and common enterprise-wide SOA infrastructure. While ESBs are great at core integration and service-sharing efforts within sub-domains, a more specialized and highly scalable infrastructure is needed to drive SOA at the enterprise, or macro-domain, level. 

The fact is that an inter-domain technology is required to address the core issues of service and information sharing within and between SOA problem domains, according to a recent report released by Forrester. Moreover, Joseph Natoli of Intel, in a recent blog post, highlighted the need for a "Right-Sized" SOA: "Key to this advice is making architecture and technology decisions which are purpose fit and ‘right sized' to the key business problems of connecting SOA across the enterprise..." Indeed, this technology is strategic in nature and becomes the most important link within the enterprise; thus it must support features and functions that live up to the expectations of the business.

The best way to look at the value of this technology is to understand the return on investment (ROI) that may be gained by using this approach and technology. What are the core benefits to the business, or, more important, what is the cost incurred by the business if this technology is not leveraged? It's an interesting journey to the business realities. 

Approaching ROI
The most practical approach is to look at the cost of moving forward with the existing approaches and technology solutions. Once that's understood, it's then helpful to look at the impact of leveraging the inter-domain SOA technology, and determine the impact on the effectiveness of the architecture, and thus the impact on the business. It should be noted that all enterprises and domains are different, and you'll have to adjust your approach and data points accordingly. 

Since this is a very complex issue with many variables that depend on the specifics of the enterprise or problem domain, it's helpful to use a simpler but still valid approach to calculating the ROI of leveraging this technology.

To that end we need to estimate a few key pieces of data:

  • Architectural inefficiencies of the "As Is" approach and technology solution in terms of impactful dollars, or cost, per year on the business.
  • Architectural efficiencies as the "To Be" approach and technology solution in terms of impactful dollars, or value, per year on the business, minus the cost of the "To Be" or inter-domain SOA technology. 
  • Degree of complexity, which is typically (but not always) based on the number of services and database attributes under management. 
  • Degree of reuse, which is based upon the number of services reused within and outside of the micro-domain, or single SOA instance. 
  • Soft issues such as the ability for the new approach and technology to bring efficiencies to the IT solutions that increase customer satisfaction, employee morale, business intelligence, or other things that are not as easy to define as hard numbers. 
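The hard-number pieces of this estimate can be organized into a simple model. The sketch below is only one way to structure it; the field names and example figures are hypothetical, not part of any prescribed methodology:

```python
from dataclasses import dataclass


@dataclass
class RoiEstimate:
    """Key data points for an inter-domain SOA ROI estimate.

    All names and figures here are illustrative assumptions.
    """
    as_is_cost: float       # yearly cost of "As Is" inefficiencies, in dollars
    to_be_value: float      # yearly value of the "To Be" architecture, in dollars
    to_be_tech_cost: float  # yearly cost of the inter-domain technology itself
    complexity: float       # complexity multiplier, typically 0.5 to 1.5
    reuse: float            # reuse multiplier, typically 0.95 to 1.05

    def net_yearly_value(self) -> float:
        """'To Be' value minus the technology cost, scaled by the
        complexity and reuse multipliers."""
        return (self.to_be_value - self.to_be_tech_cost) * self.complexity * self.reuse


# Hypothetical figures for a single enterprise:
estimate = RoiEstimate(
    as_is_cost=2_000_000,
    to_be_value=2_200_000,
    to_be_tech_cost=400_000,
    complexity=1.2,
    reuse=1.0,
)
print(estimate.net_yearly_value())  # roughly 2,160,000 per year
```

The soft issues in the last bullet resist this kind of arithmetic; they are better handled as narrative alongside the numbers.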

The Cost of "As Is"
Those who approach SOA today have a tendency to approach a strategic architectural problem using tactical approaches and technology. The end result is an instance of an efficient architecture that lacks holistic value for the enterprise. In essence, the better these problem domains can share information and services, the more value they bring to the business. That's core to our analysis here.

This is not to say that an enterprise-wide SOA is not built by many projects, or that it does not address many problem domains, only that the end goal is to create something that has a common value to the enterprise, and thus the business. 

At the narrow micro-architecture, or micro-domain, level, these SOAs typically have the following attributes:

  • Lack of security.
  • Lack of scalable access to common services.
  • Lack of scalable access to common data.
  • Lack of a common technological approach.
  • Lack of proven scalability.
  • Lack of a common SOA governance strategy.

The fact is that a mere instance of a SOA has a tendency to solve specific tactical problems, and not address the architecture issues of the enterprise. These narrow domains are a great start, but must be combined to form an enterprise-wide solution that addresses the requirements of the business, not just a department or a particular set of business processes.

Moreover, within these micro-domains, security is typically an afterthought, or is implemented only within a particular stovepipe. The lack of a common security strategy around core services and core data can leave the enterprise vulnerable. Thus, a common security infrastructure needs to be a high priority, a security strategy that spans the enterprise. 

In addition to security, there needs to be a common approach to dealing with services and information, typically abstracted from existing information systems such as ERPs, core databases, business intelligence, or core enterprise system APIs. 

While the number and types of interfaces are numerous, there needs to be a mediation layer between those interfaces, no matter how primitive, one that governs how the services and information should be addressed logically within the architecture. In other words, the mediation layer should take highly normalized and unstructured information and bind that information to new abstract schemas that make much more logical sense to the business systems that leverage them, such as a single representation of customer data, order information, or a product.
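As a minimal sketch of that mediation idea, the functions below bind two source-specific customer records to one canonical, business-friendly schema. The source field names and the canonical schema are invented for illustration:

```python
# A sketch of the mediation layer: source-specific records are bound
# to one canonical representation of "customer". All field names here
# are hypothetical.

def from_erp(record: dict) -> dict:
    """Map an ERP-style customer record to the canonical schema."""
    return {
        "customer_id": record["CUST_NO"],
        "name": record["CUST_NM"],
        "region": record["RGN_CD"],
    }


def from_crm(record: dict) -> dict:
    """Map a CRM-style customer record to the same canonical schema."""
    return {
        "customer_id": record["id"],
        "name": f'{record["first"]} {record["last"]}',
        "region": record["territory"],
    }


# Consumers see a single representation of customer data,
# regardless of which system it came from.
erp_view = from_erp({"CUST_NO": "C-100", "CUST_NM": "Acme Corp", "RGN_CD": "EMEA"})
crm_view = from_crm({"id": "C-101", "first": "Jane", "last": "Doe", "territory": "EMEA"})
```

The point of the sketch is only that the abstract schema, not the primitive source interfaces, is what the business systems address.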

While providing this common infrastructure has clear advantages, all of this is for naught if the infrastructure won't provide the required scalability. While there may be four systems accessing the same service within a single SOA micro-domain, if that service is accessed by all 150 enterprise systems simultaneously, scalability and reliability become more of a concern, and something that more tactical technology is not set up to handle. Indeed, the use of SOA provides a higher degree of reuse and agility, but the sharing of services has to meet the Service Level Agreements (SLAs) of the consumers. 

Finally, you need to consider SOA governance in the mix. While leveraging traditional runtime and design-time SOA governance technology is fine for smaller domains, enterprise-wide SOA governance must manage both policies and services at a high transaction level, and not become the bottleneck. 

Understanding these limitations, you need to assign a specific dollar amount, per year, that each of the issues addressed above is costing the business. For example, within a particular enterprise, the cost of the inefficiencies is as shown in Table 1.

Again, these costs are determined by analyzing each issue and comparing it with the current state of the architecture, also considering what would change if the architecture were "perfectly optimized" and each issue eliminated. In other words, how bad are things now versus how good they could be, and what is this costing us? Put another way, it's an analysis of the existing architecture and technology solutions, at the micro-domain level, were they to expand to an enterprise level. 

The Value of "To Be"
Considering the point made in the previous section, we need to focus on how to create more holistic and enterprise-wide solutions that are more strategic. The questions are: What is the right approach? What is the right technology? How much money will this save us?

In starting out, you need to consider the core concepts of the approach. In essence you're looking at the architecture as layers. The micro layers deal with the business solution, or the ability to assemble services and information into processes or composite applications to address the company's business needs. Moving down, you have the micro-domain services, or those services that are only specific to a single domain, such as a logistics process built for the shipping department, a service from the mainframe, or perhaps a service externalized by a SaaS provider.

We have many instances of SOA that exist in many domains. They all have services under management, and most have services that have value for the entire enterprise, and thus should be shareable, in a secure and scalable way, within the enterprise. 

Therein is the essence of the approach: placing these services into a layer of technology underneath these micro-domains that provides for scalable and secure access to both services and information by any number of consumers. In other words, a technology that functions like a network router, making sure that the services are available to those who need them, just like packets within a network. Thus we have an Enterprise Service Router, or ESR. ESRs make macro-domains, or strategic enterprise-wide SOA, possible. 

The ESR would be the core layer of the architecture, providing a common services provisioning and execution platform of sorts that's able to meet the needs of all instances of SOA within the enterprise, or micro-domains, no matter what technology or standards were selected to solve the problem, or no matter what legacy technology exists. Basically, this is a common technology-agnostic approach that does not box the SOA architects into a specific set of technologies or standards, and provides a highly scalable and secure access and management component to common services and information. 

There are several advantages to this approach:

  • Cost savings through economies of scale, by leveraging a common services and data management infrastructure
  • Strategic advantage of agility, since the services may be mixed and matched in a highly scalable and secure way, as needed, to address the requirements of the business
  • The ability to leverage the technology you currently have in place, and not force the business into costly and risky application and data migrations
  • Strategic advantage of process and service re-use, a core value of SOA
  • Reduced complexity, through a common services deployment and access mechanism that's the same throughout the enterprise
  • Ability to leverage existing assets, which does not require changes to existing systems and information

So, using a similar model as we defined earlier, we can express this value as shown in Table 2.

Of course, the value would change year-to-year, and other data points should be considered, but the general idea is the same. Thus, the idea is to understand the cost impact of the "As Is" when compared to an architecture that's approaching optimal ("To Be"), and that's the cost of doing nothing. Moreover, this is the impactful value of the "To Be" architecture, or the cost of implementing a solution that also approaches optimal. I'll bring all of this information together next, in a high-level business case. 

Creating Your Business Case for Inter-Domain SOA Technology
You'll find that the cost of the inefficiencies of "As Is" architecture is roughly the same as the value of the "To Be" architecture, leveraging a common inter-domain technology solution, such as an ESR. We determine the figures twice to understand the value of eliminating problems, and the value when those problems are eliminated. If they don't balance within 30 percent, perhaps you need to recheck your analysis, or the technology solution leveraged. Keep in mind that nothing is ever perfect, and you can approach optimization within enterprise architecture, and not achieve it. You just have to get as close as you can. 

In addition, you need to consider the other variables we discussed earlier, including degree of complexity, which becomes a multiplier of both cost and value. In essence, the more complex your architecture, the more cost it will incur if inefficient, and the more value it will bring if optimized. Typically, this is going to be .50 to 1.5, as complexity multipliers. 

As with complexity, you need to do the same with the degree of reuse, which is the number of services reused within and outside of the micro-domain. Look at this as a percentage, with a services reuse percentage of 15 percent not uncommon. You would express this as a multiplier as well, typically .95 to 1.05, depending on the number of services reused. Some enterprises reuse very few services; some reuse a great deal. It depends on the needs of the business and the architecture. 
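Putting the multipliers together with the 30 percent balance check described above yields a sketch like the following. The tolerance and the sample figures are illustrative; only the multiplier ranges and the 30 percent threshold come from the text:

```python
def adjusted_value(base_value: float, complexity: float, reuse: float) -> float:
    """Apply the complexity (0.5-1.5) and reuse (0.95-1.05) multipliers
    to a base yearly cost or value figure."""
    return base_value * complexity * reuse


def balanced(as_is_cost: float, to_be_value: float, tolerance: float = 0.30) -> bool:
    """Check whether the 'As Is' inefficiency cost and the 'To Be' value
    balance within the given tolerance (30 percent, per the discussion)."""
    if as_is_cost == 0:
        return False
    return abs(as_is_cost - to_be_value) / as_is_cost <= tolerance


# Hypothetical figures: a $1.8M base "To Be" value in a fairly
# complex architecture, against $2.0M of "As Is" inefficiency cost.
value = adjusted_value(1_800_000, complexity=1.2, reuse=1.0)
print(balanced(as_is_cost=2_000_000, to_be_value=value))  # within 30 percent
```

If the check fails, that is the signal to recheck the analysis or the technology solution, as described above.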

While the analytical issues, as we've already defined, may create a compelling business case unto themselves, the soft issues are typically where the real value is, albeit they are difficult to define. However, there is indeed a huge upside impact in providing a more effective and reactive IT architecture that's able to keep both the customers and the employees happy. The way that you approach this is very dependent upon the business you're in, but the value is huge.

The fact of the matter is that SOA has little value within a small department-level problem domain, or what we called a micro-domain in this article. Thus, in order to obtain the value of SOA, we need to put together a strategy to share both services and information enterprise-wide, or in the macro-domain. This is where your business case comes into play as an important first step.

The best way to approach this is to understand your own business drivers, and put a plan in place to address SOA at both the micro- and macro-domains, which means addressing each domain and the business issues in an incremental way. Moreover, you must also address the information and service sharing that needs to occur within and between the separate domains, and whatever technology that has been deployed there. The ESR should be scalable, secure, and reliable, and be standards and technology agnostic. Just as super highways connect towns, you must connect the various islands of information technology within the enterprise, and thus optimize the value of SOA. 


More Stories By David Linthicum

Dave Linthicum is Sr. VP at Cloud Technology Partners, and an internationally known cloud computing and SOA expert. He is a sought-after consultant, speaker, and blogger. In his career, Dave has formed or enhanced many of the ideas behind modern distributed computing including EAI, B2B Application Integration, and SOA, approaches and technologies in wide use today. In addition, he is the Editor-in-Chief of SYS-CON's Virtualization Journal.

For the last 10 years, he has focused on the technology and strategies around cloud computing, including working with several cloud computing startups. His industry experience includes tenure as CTO and CEO of several successful software and cloud computing companies, and upper-level management positions in Fortune 500 companies. In addition, he was an associate professor of computer science for eight years, and continues to lecture at major technical colleges and universities, including University of Virginia and Arizona State University. He keynotes at many leading technology conferences, and has several well-read columns and blogs. Linthicum has authored 10 books, including the ground-breaking "Enterprise Application Integration" and "B2B Application Integration." You can reach him at [email protected], follow him on Twitter, or view his profile on LinkedIn.

