Wednesday, April 16, 2008

The Spirit of Schematron in Test Driven Development (TDD)

Test Driven Development is a relatively popular methodology nowadays,
and I think XML tools can play a crucial role in better testing. Testing
frameworks are more than capable of exercising XML-based applications,
but just in case you have ever had trouble, here are a few tips. XSLT
makes an excellent tool for massaging XML data, which means it can also
help reduce large XML data sets to something manageable, whether the
result is XML or not. For example, consider a simple XSLT stylesheet
that returns content only when it finds errors while checking an Atom
feed. It is exceptionally simple, but hopefully it makes the point.
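
A minimal sketch of such a stylesheet (the Atom namespace is real; the
specific checks are just illustrative):

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:atom="http://www.w3.org/2005/Atom">
      <xsl:output method="text"/>
      <xsl:template match="/">
        <!-- Report a missing feed title -->
        <xsl:if test="not(atom:feed/atom:title)">
          <xsl:text>ERROR: feed has no title&#10;</xsl:text>
        </xsl:if>
        <!-- Report any entry that lacks an id -->
        <xsl:for-each select="atom:feed/atom:entry[not(atom:id)]">
          <xsl:text>ERROR: entry missing an id&#10;</xsl:text>
        </xsl:for-each>
      </xsl:template>
    </xsl:stylesheet>

An empty result means the feed passed; any output lines are failures
your test harness can assert against.
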
In the example, you'll also notice that the output is not contained in
an XML element. Sometimes it is easier to parse a simple text file line
by line, and this might be one of those situations. Likewise, having a
designated set of test elements can be helpful (think reports
transformed to HTML). That said, the goal is not to create some enormous
test framework in XML and XSLT. The real goal is to use a great
transformation tool to produce something you can consume easily. I
wouldn't necessarily suggest trying to validate the content of an
element or do complex string parsing: XSLT 1.0 isn't really the easiest
language for string parsing or complex math without a little help. You
can always add your own extension functions, but hopefully keeping
things simple by massaging the data gets you 80% of the way. The idea
here is to make things palatable to your own tastes... I like XML, but
I hate XML Schema and DTDs. RELAX NG is a slightly better option, but
when you just want to make sure some value is present, the methods
above can be a simpler solution.
The essence of these suggestions comes from Schematron, an excellent
validation tool that is as simple as knowing XPath. Schematron has in
fact been implemented using XSLT, so adding it to your existing test
framework should be relatively simple.
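
For flavor, the same checks expressed directly as a Schematron schema
(ISO Schematron namespace; the assertions mirror the XSLT sketch above):

    <schema xmlns="http://purl.oclc.org/dsdl/schematron">
      <ns prefix="atom" uri="http://www.w3.org/2005/Atom"/>
      <pattern>
        <rule context="atom:feed">
          <assert test="atom:title">A feed must have a title.</assert>
        </rule>
        <rule context="atom:entry">
          <assert test="atom:id">Every entry must have an id.</assert>
        </rule>
      </pattern>
    </schema>
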
There are times when XML seems to present a subtle problem within the
world of object-oriented languages. It's not a hard problem on a
technical level; working with XML is relatively simple, with plenty of
examples and resources. Things get hard when you don't have good tools
to help you along the way. Don't write off the XML landscape as foreign
to your programming language of choice when XML has more than enough
tools to seamlessly integrate testing your XML alongside your models,
views, controllers, and integrations.

Don't Be Surprised By E-Discovery

E-discovery requires government agencies to know what electronic
documents they have and be able to find them quickly if someone requests
them for a court case. That's no small task considering the enormous
volume of electronic documents created by the typical organization.
Email messages and attachments represent a good chunk of the problem,
but word-processing documents, PDFs and other digital information also
contribute to the management challenge. The amended Federal Rules of
Civil Procedure, which have heightened awareness of e-discovery, cover
a wide range of data types under the umbrella of electronically stored
information... E-discovery experts recommend establishing a taxonomy
and creating metadata tags for electronic information. The taxonomy
provides a general way to classify information, and metadata provides
detail on information to make searches more fruitful. The Electronic
Discovery Reference Model project devised an Extensible Markup Language
(XML) schema to consistently describe electronic information. [Penny]
Quirk said EDRM created the XML e-discovery standard to ensure that
consistent and common nomenclature is used for business records during
the e-discovery process; the project is scheduled for completion in this
year's second quarter... Electronic documents culled in e-discovery and
used in litigation demand special treatment: documents compiled in
significant cases at the Justice Department are kept as permanent records
of the government. Records in garden-variety cases in federal court are
considered temporary, but they might still be housed for a number of
years at one of the National Archives' Federal Records Centers. The
National Archives tapped Lockheed Martin in 2005 to build an Electronic
Records Archives system that will help the agency ingest electronic
records flagged for permanent storage; the aim now is to accept
government records in any format, encapsulating each electronic document
in an XML metadata wrapper.

Proposal for IETF NETCONF Data Modeling Language Working Group

The IESG Secretary announced that a new IETF working group has been
proposed in the Operations and Management Area, described in a draft
NETMOD Charter. The NETCONF Working Group has completed a base protocol
to be used for configuration management. However, the NETCONF protocol
does not include a standard content layer. The specifications do not
include a modeling language or accompanying rules that can be used to
model the management information that is to be configured using NETCONF.
This has resulted in inconsistent syntax and interoperability problems.
The purpose of NETMOD is to support the ongoing development of IETF
and vendor-defined data models for NETCONF. The WG will define a
"human-friendly" modeling language defining the semantics of operational
data, configuration data, notifications, and operations. This language
will focus on readability and ease of use. This language must be able
to serve as the normative description of NETCONF data models. The WG
will use YANG as its starting point for this language. Language
abstractions that facilitate model extensibility and reuse have been
identified as a work area and will be considered as a work item or
may be integrated into the YANG document based on WG consensus. The
WG will define a canonical mapping of this language to NETCONF XML
instance documents, the on-the-wire format of YANG-defined XML content.
Only data models defined in YANG will have to adhere to this on-the-wire
format. In order to leverage existing XML tools for validating NETCONF
data in various contexts, and also to facilitate exchange of data
models, the WG will define mapping rules from YANG to the DSDL data
modeling framework (ISO/IEC 19757) with additional annotations to
preserve semantics. The initial YANG mapping rules specifications
are expressly defined for NETCONF modeling. However, there may be
future areas of applicability beyond NETCONF, and the WG must provide
suitable language extensibility mechanisms to allow for such future
work. The NETMOD WG will only address modeling NETCONF devices and the
language extensibility mechanisms... Initial deliverables: (1) An
architecture document explaining the relationship between YANG and
its inputs and outputs; (2) The YANG data modeling language and
semantics; (3) Mapping rules of YANG to XML instance data in NETCONF;
(4) YIN, a semantically equivalent fully reversible mapping to an
XML-based syntax for YANG. YIN is simply the data model in an XML
syntax that can be manipulated using existing XML tools (e.g., XSLT);
(5) Mapping rules of YANG to DSDL data modeling framework (ISO/IEC 19757),
including annotations for DSDL to preserve top-level semantics during
translation; (6) A standard type library for use by YANG. The IESG
has not made any determination as yet; please send your comments to
the IESG mailing list by April 22, 2008.
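
To make the YANG-to-XML relationship concrete, here is a rough sketch
(module and element names invented) of a tiny YANG fragment and the
NETCONF XML instance data it could describe:

    module acme-interface {
      namespace "http://example.com/ns/interface";
      prefix aif;
      container interface {
        leaf name    { type string; }
        leaf enabled { type boolean; }
      }
    }

    <interface xmlns="http://example.com/ns/interface">
      <name>eth0</name>
      <enabled>true</enabled>
    </interface>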

W3C Invites Public Comment on Content Transformation Guidelines 1.0

W3C announced that the Mobile Web Best Practices Working Group has
published the First Public Working Draft for "Content Transformation
Guidelines 1.0." This document provides guidance to managers of content
transformation proxies and to content providers for how to coordinate
when delivering Web content. Content transformation techniques diverge
widely on the web, with many non-standard HTTP implications, and no
well-understood means either of identifying the presence of such
transforming proxies, nor of controlling their actions. From the point
of view of this document, Content Transformation is the manipulation in
various ways, by proxies, of requests made to and content delivered by
an origin server with a view to making it more suitable for mobile
presentation. The W3C MWI BPWG neither approves nor disapproves of
Content Transformation, but recognizes that it is being deployed widely,
and in widely divergent ways, across mobile data access networks. This
document establishes a framework to allow that identification and
control to happen.

Use HATS to Generate Atom Feeds for Mainframe Applications

Nowadays, content distributors deliver all content, including news and
site updates, as feeds. Most enterprise applications use feeds for
various purposes, including to monitor an application and check the
status of a project. Content providers publish a feed link on their site
that users register with a feed reader. The feed reader checks for
updates to the registered feeds at regular intervals. When it detects
an update in the content, the feed reader requests the updated content
from the content provider. The feeds contain only a summary of the content,
but they provide a link to the detailed content. The Atom Syndication
Format and RSS are the most common feed specifications. We use Atom
feeds in this article, but you can easily switch to RSS with a little
modification. This article leverages a product called IBM
WebSphere Host Access Transformation Services (HATS), which converts
any given green-screen, character-based 3270 or 5250 host application
into a Web application (HTML) or rich-client application. HATS also allows
programmatic interfaces to convert the identified content in these host
applications into any other format. We take a step-by-step approach to
show you how to write a HATS program that converts the host application
content into Atom feeds... Delivering data as Atom feeds in mainframes
opens a new world of possibilities for enterprise applications.
Organizations can use mashup editors to extract data from companies with
external or internal feeds and create new applications or information.
For example, call centers can take advantage of mashups by passing a
calling customer's ZIP code information to Google Maps to identify the
location of the customer. This can help the call center employees
personalize the conversation by enquiring about the weather from the
customer's location, and so on. The delivery of data as Atom feeds in
mainframe servers is one of the fundamental building blocks that enables
an organization to embrace Web 2.0.
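
For a concrete sense of the output format, here is a minimal sketch of
the kind of Atom feed such a HATS transformation might emit (all names,
ids, and URLs are hypothetical):

    <feed xmlns="http://www.w3.org/2005/Atom">
      <title>Order Status Updates</title>
      <id>urn:example:host-app:orders</id>
      <updated>2008-04-16T09:30:00Z</updated>
      <author><name>Host Application</name></author>
      <entry>
        <title>Order 1042 shipped</title>
        <id>urn:example:host-app:orders:1042</id>
        <updated>2008-04-16T09:30:00Z</updated>
        <summary>Summary extracted from the 3270 screen.</summary>
        <link href="http://example.com/orders/1042"/>
      </entry>
    </feed>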

Apache Abdera: Atom, AtomPub, and Java

The Apache Abdera project, an open source Atom Syndication Format and
Atom Publishing Protocol implementation still in its incubation phase,
has recently reached its 0.4.0 milestone, an important step towards
graduation [as an Apache project]. Snell: "While Atom and AtomPub
certainly began life as a way of syndicating and publishing Weblog
content, it has proven useful for a much broader range of applications.
I've seen Atom being used for contacts, calendaring, file management,
discussion forums, profiles, bookmarks, wikis, photo sharing, podcasting,
distribution of Common Alerting Protocol alerts, and many other cases.
Atom is relevant to any application that involves publishing and managing
collections of content of any type... Abdera is an open source
implementation of the Atom Syndication Format and Atom Publishing Protocol.
It began life as a project within IBM's WebAhead group and was donated to
the Apache Incubator in June 2006. Since then, it has evolved into the
most comprehensive open-source, Java-based implementation of the Atom
standards... Abdera has been part of the Apache Incubator for long enough.
While there are still some details to work out, I would very much like
to see Abdera graduate to its own Top Level Project at Apache, and become
host to a broad range of Atom-based applications." Diephouse: "Look to
some of the public services out there: most of the APIs for Google are
based on AtomPub. Microsoft is moving toward it for web APIs too. These
services are all going beyond just blogs. AtomPub also goes beyond
public web APIs -- I've noticed that many enterprises are starting to
use AtomPub for some of their internal services as well. Both AtomPub and
SOAP/WSDL give you a way to build a service for others to use. But AtomPub
takes a fundamentally different approach to helping users implement
services. It implements constraints which give new types of freedom.
Because the data format is constrained -- every entry has a title, an
id, and content/summary -- I can use an Atom feed from any type of
application and get some useful information out of it... Abdera includes
support for developing/consuming AtomPub services, an IRI library, a URI
template library, unicode normalization, extensions for things like XML
signature/encryption, GData, GeoRSS, OAuth, JSON and more. One of the
cool new things in the latest release is a set of 'adapters' which
allow you to have an AtomPub service without any coding, by storing
entries in JDBC, JCR or the filesystem...

Sunday, April 13, 2008

Who Trumps bin Laden as a Cyberthreat? Look in the Mirror.

From the San Francisco RSA 2008 Conference: "It turns out al-Qaida's
leader and his cohorts aren't the biggest threat to our cybersecurity.
You are... Security gurus have long urged the business world to turn
network security into part of the corporate DNA. The message is not
fully getting through. And now we're seeing the predictable results.
In years past, [Symantec CEO John] Thompson and other computer security
executives have pushed the idea of making cyber-security as familiar
to most people as the fire prevention campaign underwritten by the
government in the 1960s and 1970s. Considering the amount of money
Uncle Sam is spending on cyber-security these days, that's a pipedream.
Department of Homeland Security Secretary Michael Chertoff, who also
presented a keynote on Tuesday, offered little indication Washington
was about to ride to the rescue. In remarks during his prepared speech
and subsequent press conference, Chertoff offered a dutiful recitation
of what he described as the President's interest in shoring up the
nation's digital security. Give Chertoff credit for being candid about
where DHS has come up short. He said the government needs to reduce
its (literally) thousands of network access points to around 50. At
the same time, Chertoff wants his department to detect and analyze
computer anomalies faster. A big part of that will involve a revamp
of U.S. CERT's early warning system... In the end, however, money
talks and you-know-what walks. The feds only have a $115 million budget
to work with. Chertoff's department has requested $192 million for
the new fiscal year but that's still doing it on the cheap. By
comparison, we spend $720 million in Iraq each day.

SOA Software's SOLA Celebrates 5 Years

SOA Software, a leading mainframe web services vendor, today announced
that SOLA, its flagship mainframe SOA product, has reached the five-year
mark of running reliably in extremely high-volume production
environments. During this period SOLA has not been responsible for a
single production outage, despite handling tens of millions of
transactions every day. SOLA runs the world's largest mainframe SOA
implementations. A number of SOLA customers use it to run many millions
of mainframe web services transactions per day, and many customers'
plans anticipate volume in the 20-30 million transactions per day range.
Because SOLA offers a complete SOA solution there is no requirement to
integrate multiple products when building an enterprise-class SOA
incorporating the mainframe. SOLA includes a drag-and-drop graphical
development studio, an integrated UDDI registry, WS-Security, WS-Policy,
monitoring, logging, a management console and dashboard, SLA management,
BPEL, SAML, X509 Certificates, LDAP and Active Directory. SOLA eliminates
the complexity and expense of combining multiple products, such as CICS
TS 3.x, WebSphere and RAD... SOLA is the only mainframe SOA product to
offer closed-loop Governance automation. A service is automatically
governed from the point of creation because it inherits a security policy.
Policy, by means of WS-PolicyAttachment, is associated with the service
through all phases of the Software Development Lifecycle. It is not
possible to create or run an ungoverned service. Other features of SOLA
include integration with enterprise change management, Global Dictionary,
Logging, Auditing, Outbound SOAP requests, Batch support, Integration
with external UDDI, version control, support for the Software Development
Lifecycle, WSDL first and integration with SOA Management tools, making
SOLA the only secure, standards-based, and Governable product in the
space. SOLA also offers XACML for authorization and a comprehensive
identity mapping system that allows for the mapping of any credential
(LDAP, etc.) to a mainframe RACF ID.

OOXML Triggers Demonstration in Norway

"People were demonstrating today in Oslo in front of the ISO SC34
meeting against the adoption of Microsoft OOXML as an ISO standard, and
especially against the behaviour of Standards Norway, who voted Yes to
the specification, despite a lack of support by a majority of the
technical committee. Geir Isene is reporting about the demonstration...
We are not here today in order to bash Microsoft. We are here because we
believe in open standards. We are not even here today because we are
opposed to OOXML. We are here because we are opposed to OOXML as an ISO
standard. We are not here because we want to discredit the ISO. We are
here because we want to defend ISO's integrity. We are here because we
want to draw attention to the scandalous behaviour of the people in
Standards Norway whose job it is to represent Norwegian users and software
vendors. And we are here because we want to prevent the adoption of a
damaging IT standard in Norway... It's never over until the fat lady
sings, and this fat lady only just got started...

Public Review Draft for WebCGM Version 2.1

Members of the OASIS CGM Open WebCGM Technical Committee have released
"WebCGM Version 2.1" as a Committee Draft for public review. The comment
period ends June 01, 2008. Computer Graphics Metafile (CGM) is an ISO
standard, defined by ISO/IEC 8632:1999, for the interchange of 2D vector
and mixed vector/raster graphics. WebCGM is a profile of CGM, which adds
Web linking and is optimized for Web applications in technical
illustration, electronic documentation, geophysical data visualization,
and similar fields. First published (1.0) in 1999, WebCGM unifies
potentially diverse approaches to CGM utilization in Web document
applications. It therefore represents a significant interoperability
agreement amongst major users and implementers of the ISO CGM standard.
The present version, WebCGM 2.1, refines and completes the features of
the major WebCGM 2.0 release. WebCGM 2.0 added a DOM (API) specification
for programmatic access to WebCGM objects, a specification of an XML
Companion File (XCF) architecture, and extended the graphical and
intelligent content of WebCGM 1.0. The content of the WebCGM 2.1 profile
comprises less than a dozen items that were arguably within the scope
of WebCGM 2.0, but which arose too late in the standardization of the
latter. On 30-January-2007, OASIS and W3C simultaneously published
WebCGM 2.0 as both an OASIS Standard and a W3C Recommendation, which
are identical in all technical aspects, and differ only in the format
and presentation styles of the respective organizations.

Google's OpenID Provider Via Google App Engine

"Shortly after Google released Google Web Engine last night, Ryan
Barrett of Google released an application for the platform that
essentially makes Google an OpenID Provider. Check it out here [...]
You can use your Google Account to log into any site that supports
OpenID! Ryan wrote: "If you've talked to me about work during the last
couple years, I've probably downplayed it, resorted to generalities,
or just changed the subject. No longer! We've finally taken the wraps
off our project, Google App Engine. From the docs: 'Google App Engine
lets you run your web applications on Google's infrastructure. App
Engine applications are easy to build, easy to maintain, and easy to
scale as your traffic and data storage needs grow. With App Engine,
there are no servers to maintain: You just upload your application,
and it's ready to serve your users.' Personally, I spent most of my
time writing the datastore, both the backend and much of the Python API.
When I found extra time, though, I had a lot of fun writing apps and
libraries on top of App Engine. I particularly enjoyed writing an
interactive shell, an OpenID provider, and a full text search library.
From the OpenID Wiki: OpenID allows anyone who can run a web server to
run an identity server. Your identity server is separate from your
identity, so you are free to use any identity server that has some
ability to validate your identity and you can change between them at
will. An identity server is sometimes referred to as an identity provider.
If you wish, you can use the services listed below with your own website
as your identifier using delegation.
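
Delegation itself is just two link elements in the head of your own
page; a sketch with placeholder URLs (OpenID 1.1 attribute names):

    <link rel="openid.server"
          href="https://your-provider.example.com/server"/>
    <link rel="openid.delegate"
          href="https://your-provider.example.com/yourname"/>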

OGC Adopts ebRIM Application Profile for Catalogues

The Open Geospatial Consortium announced that its membership has
approved the OASIS ebRIM (Electronic Business Registry Information
Model) application profile of the OpenGIS Catalogue Service 2.1.2
standard. The Catalogue Standard specifies a design pattern that
allows for the definition of interfaces called application profiles
based on different standards, such as Z39.50, ebRIM, UDDI, or ISO
metadata, that support the ability to publish and search collections
of descriptive information (metadata) about geospatial data, services
and related information objects. The ebRIM application profile was
developed and adopted because it enables catalogs to handle services
as well as a variety of other geospatial resource types, such as symbol
libraries, coordinate reference systems, application profiles, and
application schemas and geospatial metadata. The OGC is an international
industry consortium of more than 345 companies, government agencies,
research organizations, and universities participating in a consensus
process to develop publicly available interface specifications.
OpenGIS Specifications support interoperable solutions that geo-enable
the Web, wireless and location-based services, and mainstream IT. The
specifications empower technology developers to make complex spatial
information and services accessible and useful with all kinds of
applications.

Building an Entitlements Management Solution

What does it take to build an Entitlements Management solution? That
depends on who you ask of course. However, when I look at commercial
products in this area I see certain common architectural patterns.
Many of the products that I've seen make use of a set of common elements
defined by the OASIS XACML standard (Extensible Access Control Markup
Language). The [referenced] picture shows the typical components of an
Entitlements Management solution. The XACML spec defines the role of
the Policy Administration Point (PAP), the Policy Decision Point (PDP),
the Policy Enforcement Point (PEP), and the Policy Information Points
(PIP). The Policy Administration Point (PAP) manages the creation and
storage of policy data in the Policy Store. The administrator interacts
with the PAP (typically) through a browser based management console
where roles, policies, resources, actions and so forth are defined and
managed. The policy store may be an LDAP directory or a database. The
PAP may also provide facilities for policy import and export. Most
products provide some management APIs that allow customers to embed
administrative functionality into their own applications. Runtime role
or authorization decisions are determined at the Policy Decision Points.
Typically I've seen two ways that PDPs are deployed: (1) As a
centralized entitlements server that can be invoked by remote clients
via RMI, Web Service calls or using the XACML 2.0 request/response
protocol. (2) As an embedded PDP deployed in the same process space as
the application. The most common examples are PDPs embedded in a JVM
for plain Java applications or embedded in an application server for
J2EE applications... The PDPs can be configured to get data from one
or more Policy Information Points (PIPs). These PIPs can be user or
application directories or databases that contain information that
is required to make an access decision. Such information includes
user, group, and resource attributes (e.g. user profile information,
account balances and limits, etc.). These attributes can then be
used in the policies which control access...
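
To make the PEP-to-PDP exchange concrete, a minimal XACML 2.0 request
context might look like the following sketch (the attribute values are
hypothetical; the AttributeId URNs are the standard ones):

    <Request xmlns="urn:oasis:names:tc:xacml:2.0:context:schema:os">
      <Subject>
        <Attribute
            AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id"
            DataType="http://www.w3.org/2001/XMLSchema#string">
          <AttributeValue>alice</AttributeValue>
        </Attribute>
      </Subject>
      <Resource>
        <Attribute
            AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
            DataType="http://www.w3.org/2001/XMLSchema#string">
          <AttributeValue>/accounts/12345</AttributeValue>
        </Attribute>
      </Resource>
      <Action>
        <Attribute
            AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
            DataType="http://www.w3.org/2001/XMLSchema#string">
          <AttributeValue>read</AttributeValue>
        </Attribute>
      </Action>
      <Environment/>
    </Request>

The PDP evaluates this against its policies and returns a response
whose Decision is Permit, Deny, Indeterminate, or NotApplicable,
possibly with obligations attached.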

Mathematical Markup Language (MathML) Version 3.0 Draft Published

W3C's Math Working Group has published a Working Draft of "Mathematical
Markup Language (MathML) Version 3.0." This is the third draft of the
specification, which defines the Mathematical Markup Language, or
MathML, an XML application for describing mathematical notation and
capturing both its structure and content. The goal of MathML is to
enable mathematics to be served,
received, and processed on the World Wide Web, just as HTML has enabled
this functionality for text. This specification of the markup language
MathML is intended primarily for a readership consisting of those who
will be developing or implementing renderers or editors using it, or
software that will communicate using MathML as a protocol for input or
output. It is not a User's Guide but rather a reference document. MathML
can be used to encode both mathematical notation and mathematical
content. About thirty-five of the MathML tags describe abstract
notational structures, while roughly another one hundred and seventy
provide a way of unambiguously specifying the intended meaning of an
expression. Additional chapters discuss how the MathML content and
presentation elements interact, and how MathML renderers might be
implemented and should interact with browsers. Finally, this document
addresses the issue of special characters used for mathematics, their
handling in MathML, their presence in Unicode, and their relation to
fonts. While MathML is human-readable, in all but the simplest cases,
authors use equation editors, conversion programs, and other specialized
software tools to generate MathML. Several versions of such MathML
tools exist, and more, both freely available software and commercial
products, are under development.
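
To illustrate the two vocabularies, here is the same expression,
x^2 + 4, first in presentation markup and then in content markup:

    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <mrow>
        <msup><mi>x</mi><mn>2</mn></msup>
        <mo>+</mo>
        <mn>4</mn>
      </mrow>
    </math>

    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <apply>
        <plus/>
        <apply><power/><ci>x</ci><cn>2</cn></apply>
        <cn>4</cn>
      </apply>
    </math>

The first says how the expression looks; the second says unambiguously
what it means.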

Wednesday, April 9, 2008

RSA 2008: BT Trials Federated Identity Management

BT is experimenting with a federated identity management system that
could be rolled out to its eight million internet users and corporate
customers. A commercial version would allow users to identify themselves
for websites and applications and other users to access data, do work
and transact business, said Robert Temple, BT's chief security architect.
Using CA's Siteminder software, BT is giving internal staff web access
to applications such as PeopleSoft, Siebel, Oracle Financials, Citrix,
an XML gateway, and a voice-verification system from Persay. Temple said
the company's intention is to provide managed user identity as a "common
capability" of the kind relatively common in IT but rare in
telecommunications. Temple said BT runs 32 discrete networks. As a
result it has too many RADIUS identity authentication servers.
Learning how to consolidate how it manages user identities on all these
networks is the only way it would be possible to extend similar
safeguards to BT customers, he said. It has opted to use the Liberty
Alliance's Security Assertion Markup Language (SAML) 2.0 standard for
federated identity management. However, it has proved hard to find
external contractors willing and able to help BT as most were familiar
with earlier versions of SAML. Temple noted that relationships between
BT and organisations sharing its federated IDs were plagued by lawyers
and contracts. "In the end, we asked the lawyers politely to get out of
the way as we knew what we were doing," he said. Temple said this was
not to minimise the legal issues, which required partners to spend a
lot of time building trust in each other. These lessons would help to
reduce the learning curve for user organisations when the time came for
them to make more use of the web for business applications...

SCA Java EE Integration Specification Version 0.9

On March 28, 2008 Version 0.9 of the SCA "Java EE Integration
Specification" was published by OSOA authors as part of the SCA
Service Component Architecture; contributors include BEA, Cape Clear,
IBM, Interface21, IONA, Oracle, Primeton, Progress Software, Red Hat,
Rogue Wave, SAP, Siemens, Software AG, Sun, Sybase, and TIBCO. The
specification defines a model of using SCA assembly in the context of
a Java EE runtime that enables integration with Java EE technologies
on a fine-grained component level as well as use of Java EE applications
and modules in a coarse-grained large system approach. The Java EE
specifications define various programming models that result in
application components, such as Enterprise Java Beans (EJB) and Web
applications that are packaged in modules and that are assembled to
enterprise applications using a Java Naming and Directory Interface
(JNDI) based system of component level references and component naming.
Names of Java EE components are scoped to the application package
(including single module application packages), while references, such
as EJB references and resource references, are scoped to the component
and bound in the Environment Naming Context (ENC). In order to reflect
and extend this model with SCA assembly, this specification introduces
the concept of the Application Composite and a number of implementation
types, such as the EJB implementation type and the Web implementation
type, that represent the most common Java EE component types.
Implementation types for Java EE components associate those component
implementations with SCA service components and their configuration,
consisting of SCA wiring and component properties as well as an assembly
scope (i.e. a composite). Note that the use of these implementation
types does not create new component instances as far as Java EE is
concerned. Section 3.1 explains this in more detail. In terms of
packaging and deployment this specification supports the use of a Java
EE application package as an SCA contribution, adding SCA's domain
metaphor to regular Java EE packaging and deployment. In addition, the
JEE implementation type provides a means for larger scale assembly of
contributions in which a Java EE application forms an integrated part
of a larger assembly context and where it is viewed as an implementation
artifact that may be deployed several times with different component
configurations.
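
As a rough sketch only (component and module names are invented, and
details differ across drafts of the specification), an application
composite that exposes an existing EJB as an SCA component might look
like this:

    <composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
               name="OrderApplication">
      <component name="OrderService">
        <!-- EJB implementation type: reuses an existing session bean -->
        <implementation.ejb ejb-link="OrderModule.jar#OrderBean"/>
        <reference name="inventory" target="InventoryService"/>
      </component>
    </composite>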

Intel Releases SOA Security Toolkit

Intel has introduced its SOA Security Toolkit as a release candidate.
Part of Intel's family of XML tools, the toolkit is a high-performance
software module that addresses the confidentiality needs of
services-oriented architectures (SOA) by providing XML digital
signatures, encryption, and decryption capabilities for SOAP protocol
messages. Enterprises adopting and deploying Service Oriented
Architecture (SOA) solutions rely on message formats defined in XML
(Extensible Markup Language). The extensibility, verbosity and
structured nature of XML create performance challenges for software
developers seeking to provide content security in this dynamic,
heterogeneous environment. The Intel SOA Security Toolkit is standards
compliant, for easy integration into existing XML processing environments
and is optimized to support the authentication, confidentiality and
integrity of complex and large-size XML documents. The Intel SOA Security
Toolkit 1.0 for Java environments is a high-performance policy-driven
API available for Linux and Windows. Compliant with the WS-Security
1.0/1.1 and SOAP 1.1/1.2 standards, the toolkit focuses on confidentiality,
integrity and non-repudiation for SOA environments. This toolkit enables
encryption and decryption of SOAP message data, digital signature and
verification via a wide range of security algorithms, using industry
standards, for both servers as well as application environments. The
toolkit lets users provide their own XML policy file as an input. Through
this policy file, users can specify for the API security policy engine
which key provider and trust manager to instantiate, using either a
custom or the default class loader implementation. The security policy
engine then applies the specified policy, obtaining keys and
certificates through the specified key provider and performing the
trust check using the specified trust manager. The toolkit supports
all types of X.509 certificates, private keys, and shared keys.

Cool URIs for the Semantic Web

Members of the W3C Semantic Web Education and Outreach (SWEO) Interest
Group have published an Interest Group Note "Cool URIs for the Semantic
Web." It constitutes a tutorial explaining decisions of the Technical
Architecture Group (TAG) for newcomers to Semantic Web technologies. The
document was initially based on the DFKI Technical Memo TM-07-01, 'Cool
URIs for the Semantic Web' and was subsequently published as a W3C
Working Draft in December 2007, and again in March 2008, by the Semantic
Web Education and Outreach (SWEO) Interest Group of the W3C, part of the
W3C Semantic Web Activity. The drafts were publicly reviewed, especially
by the TAG and the Semantic Web Deployment Group (SWD). Summary: The
Resource Description Framework (RDF) allows users to describe both Web
documents and concepts from the real world -- people, organisations,
topics, things -- in a computer-processable way. Publishing such
descriptions on the Web creates the Semantic Web. URIs (Uniform Resource
Identifiers) are very important, providing both the core of the framework
itself and the link between RDF and the Web. This document presents
guidelines for their effective use. It discusses two strategies, called
303 URIs and hash URIs. It gives pointers to several Web sites that use
these solutions, and briefly discusses why several other proposals have
problems. Given only a URI, machines and people should be able to retrieve
a description about the resource identified by the URI from the Web. Such
a look-up mechanism is important to establish shared understanding of
what a URI identifies. Machines should get RDF data and humans should get
a readable representation, such as HTML. The standard Web transfer protocol,
HTTP, should be used. There should be no confusion between identifiers
for Web documents and identifiers for other resources. URIs are meant
to identify only one of them, so one URI can't stand for both a Web
document and a real-world object.
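
The 303 strategy in practice, with hypothetical URIs: dereferencing the
URI of a person redirects the client to a document about that person:

    GET /id/alice HTTP/1.1
    Host: example.com
    Accept: application/rdf+xml

    HTTP/1.1 303 See Other
    Location: http://example.com/doc/alice.rdf

With hash URIs, e.g. http://example.com/about#alice, the fragment is
stripped before the request is made, so the thing and the document
about it get distinct identifiers without the extra round trip.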

Google App Engine Supports Scalable Application Development

Google has announced the availability of its free Google App Engine which
provides a fully-integrated application environment, making it "easy
to build scalable applications that grow from one user to millions of
users without infrastructure headaches." According to the Google
announcement, "Google App Engine gives you access to the same building
blocks that Google uses for its own applications, making it easier to
build an application that runs reliably, even under heavy load and with
large amounts of data. The development environment includes the following
features: (1) Dynamic webserving, with full support of common web
technologies; (2) Persistent storage powered by Bigtable and GFS [Google
File System, a scalable distributed file system for large distributed
data-intensive applications] with queries, sorting, and transactions;
(3) Automatic scaling and load balancing; (4) Google APIs for
authenticating users and sending email; (5) Fully featured local
development environment. App Engine applications are implemented using
the Python programming language. The App Engine Python runtime environment
includes a specialized version of the Python interpreter, the standard
Python library, libraries and APIs for App Engine, and a standard
interface to the web server layer. Google App Engine and Django both
have the ability to use the WSGI standard to run applications. As a result,
it is possible to use nearly the entire Django stack on Google App Engine,
including middleware. As a developer, the only necessary adjustment is
modifying your Django data models to make use of the Google App Engine
Datastore API to interface with the fast, scalable Google App Engine
datastore. Since both Django and Google App Engine have a similar concept
of models, as a Django developer, you can quickly adjust your application
to use our datastore. Google App Engine packages these building blocks
and takes care of the infrastructure stack, leaving you more time to
focus on writing code and improving your application... This preview of
Google App Engine is available for the first 10,000 developers who sign
up, and we plan to increase that number in the near future. During this
preview period, applications are limited to 500MB of storage, 200M
megacycles of CPU per day, and 10GB bandwidth per day. We expect most
applications will be able to serve around 5 million pageviews per month.
In the future, these limited quotas will remain free, and developers will
be able to purchase additional resources as needed..."
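
For a sense of the programming model, a minimal request handler in the
preview-era webapp framework looked roughly like this (the canonical
"hello world" shape; the handler name and message are illustrative):

    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class MainPage(webapp.RequestHandler):
        def get(self):
            # Write a plain-text response for GET /
            self.response.headers['Content-Type'] = 'text/plain'
            self.response.out.write('Hello, App Engine!')

    # Map URL paths to handler classes
    application = webapp.WSGIApplication([('/', MainPage)], debug=True)

    def main():
        run_wsgi_app(application)

    if __name__ == '__main__':
        main()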

New WSO2 Identity Solution Feature-Rich with OpenID

Developers today announced the "WSO2 Identity Solution", which enables
LAMP and Java websites to provide strong authentication based on the
new interoperable Microsoft CardSpace technology. New features in
version 1.5 include: (1) OpenID Provider and relying party component
support; (2) OpenID information cards based on user name-token credential
and self issued credential; (3) SAML 2.0 support. "This new release
includes OpenID and OpenID Information Cards, further enhancing the WSO2
Identity Solution to cater to a wider audience for web based
authentication. OpenID is a key feature in decentralizing single sign-on,
much favored by many users. The WSO2 Identity Solution is built on the
open standards Security Assertion Markup Language (SAML) and WS-Trust.
This version supports SAML version 2.0 in addition to 1.1 which was
available in the previous version of the WSO2 Identity Solution. WSO2's
open source security offering features an easy-to-use Identity Provider
that is controlled by a simple Web-based management console and supports
interoperability with multiple vendors' CardSpace components. This
includes those provided by Microsoft .NET. The WSO2 Identity Solution
also works with current enterprise identity directories, such as those
based on the Lightweight Directory Access Protocol (LDAP) and Microsoft
Active Directory, allowing them to leverage their existing infrastructure.
In addition to the Identity Provider the WSO2 Identity Solution provides
a Relying Party Component Set which plugs into the most common Web
servers to add support for CardSpace authentication and now OpenID."
The software is available for download, governed by the open source
Apache License, Version 2.0.

First Public Draft: Health Care and Life Sciences (HCLS) Knowledgebase

Members of the W3C Semantic Web in Health Care and Life Sciences
Interest Group (HCLS) have released a First Working Draft for a "HCLS
Knowledgebase" specification. This document is one of two initial WDs.
The HCLS Knowledgebase (HCLS-KB) is a biomedical knowledge base that
integrates 15 distinct data sources using currently available Semantic
Web Technologies such as the W3C standard Web Ontology Language (OWL)
and Resource Description Framework (RDF). This report outlines which
resources were integrated, how the KB was constructed using freely
available triple store technology, how it can be queried using the W3C
Recommended RDF query language SPARQL, and what resources and inferences
are involved in answering complex queries. While the utility of the KB
is illustrated by identifying a set of genes involved in Alzheimer's
Disease, the approach described here can be applied to any use case
that integrates data from multiple domains.
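
The shape of such a query, sketched against an invented vocabulary
(the ex: prefix is hypothetical; the real KB uses its own ontologies):

    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX ex:   <http://example.org/biomed#>

    SELECT ?gene ?label
    WHERE {
      ?gene a ex:Gene ;
            rdfs:label ?label ;
            ex:implicatedIn ex:AlzheimersDisease .
    }
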
"Experiences with the Conversion of SenseLab databases to RDF/OWL"
shares implementation experience of the Yale Center for Medical
Informatics: "One of the challenges facing Semantic Web for Health
Care and Life Sciences is that of converting relational databases
into Semantic Web format. The issues and the steps involved in such
a conversion have not been well documented. To this end, we have
created this document to describe the process of converting SenseLab
databases into OWL. SenseLab is a collection of relational (Oracle)
databases for neuroscientific research. The conversion of these
databases into RDF/OWL format is an important step towards realizing
the benefits of Semantic Web in integrative neuroscience research.
This document describes how we represented some of the SenseLab
databases in Resource Description Framework (RDF) and Web Ontology
Language (OWL), and discusses the advantages and disadvantages of
these representations. Our OWL representation is based on the reuse
of existing standard OWL ontologies developed in the biomedical
ontology communities." The mission of the W3C Health Care and Life
Sciences (HCLS) Interest Group is to show how to use Semantic Web
technology to answer cross-disciplinary questions in life science that
have, until now, been prohibitively difficult to research. The
success of the group continues to draw industry interest. W3C Members
are currently reviewing a draft charter that would enable the renewed
HCLS Interest Group to develop and support use cases that have clear
scientific, business and/or technical value, using Semantic Web
technologies in three areas: life science, translational medicine,
and health care. W3C invites Members to review the draft charter
(which is public during the review), and encourages those who are
interested in using the Semantic Web to solve knowledge representation
and integration on a large scale to join the Interest Group.

DMTF SM CLP Specification Adopted as an ANSI INCITS Standard

The Distributed Management Task Force announced a major technology
milestone in achieving "National Recognition with a Newly Approved ANSI
Standard." Its Server Management Command Line Protocol (SM CLP)
specification, a key component of DMTF's Systems Management Architecture
for Server Hardware (SMASH) initiative, has been approved as an American
National Standards Institute (ANSI) InterNational Committee for
Information Technology Standards (INCITS) standard. DMTF will continue
to work with INCITS to submit the new ANSI standard to the International
Organization for Standardization/International Electrotechnical
Commission (ISO/IEC) Joint Technical Committee 1 (JTC 1) for approval
as an
international standard. The INCITS Executive Board recently approved the
SM CLP standard, which has been designated ANSI INCITS 438-2008. INCITS
is accredited by ANSI, the organization that oversees the development of
American National Standards by accrediting the procedures of
standards-developing organizations, such as INCITS. SM CLP (DSP0214) is
a part of DMTF's SMASH initiative, which is a suite of specifications
that deliver architectural semantics, industry standard protocols and
profiles to unify the management of the data center. The SM CLP standard
was driven by a market requirement for a common command language to
manage a heterogeneous server environment. Platform vendors provide tools
and commands in order to perform systems management on their servers.
SM CLP unifies management of multi-vendor servers by providing a common
command language for key server management tasks. The spec also enables
common scripting and automation using a variety of tools. The SM CLP spec
allows management solution vendors to deliver many benefits to IT
customers. The spec enables data center administrators to securely manage
their heterogeneous server environments using a command line protocol
and a common set of commands. SM CLP also enables the development of
common scripts to increase data center automation, which can help
significantly reduce management costs... The CLP is defined as a
character-based message protocol and not as an interface, in a fashion
similar to Simple Mail Transfer Protocol (RFC 2821). The CLP is a
command/response protocol, which means that a text command message is
transmitted from the Client over the transport protocol to the
Manageability Access Point (MAP). The MAP receives the command and
processes it. A text response message is then transmitted from the MAP
back to the Client... The CLP supports generating XML output data
(Extensible Markup Language, Third edition), as well as keyword mode
and modes for plain text output. XML was chosen as a supported output
format due to its acceptance in the industry, establishment as a
standard, and the need for Clients to import data obtained through the
CLP into other applications.
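
For flavor, a hypothetical CLP exchange (the target address /system1
follows the spec's addressing style; the prompt and output shown here
are abbreviated and purely illustrative):

    -> show /system1
    Command completed successfully.
    /system1
        Properties:
            name=WebServer01
            enabledstate=Enabled

    -> stop /system1
    Command completed successfully.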

Tuesday, April 8, 2008

XML and Government Schizophrenia

The U.S. Government is very leery of technology fads and that is why
it often has a love/hate relationship with XML. For every technology
that exists, the government has a huge legacy investment. So, while the
corporate world may turn on a dime and quickly adopt the latest and
greatest thing -- the government must contend with huge legacy issues,
a two-year (minimum) budget planning cycle, and a horde of technologists
actively engaged and personally invested in that legacy technology that
you want to throw away! [...] Let me briefly discuss a program that I
initiated when working for the Department of Homeland Security (DHS).
The National Information Exchange Model (NIEM) started as a joint
venture between DHS and the Department of Justice (DOJ) to harmonize
and speed
up the process of information sharing between the federal government
and state and local governments -- actually State, Local and Tribal
governments. The basic idea is that it combines a registry of standard
data objects (modeled via XML Schema), a process for quickly producing
an exchange message, a governance process for the model, and robust tool
support. The model leveraged and extended an existing model called the
Global Justice XML Data Model (GJXDM). It is widely used by law
enforcement at all levels of government and now is also being widely
used at DHS. It has multiple success stories behind it including the
Amber Alert and the national sex offender registry. I highly encourage
everyone to look at it and help make it better. So, what does this mean
for Government Schizophrenia? For information sharing, XML is a favorite
but is attacked continuously in relation to weak data modeling support,
weak encoding of binary objects, performance issues, and many more...

Web Oriented Architecture (WOA) May Soon Eclipse SOA

A recent blog post questions whether services oriented architecture
(SOA) was driving substantive transformation inside of enterprise IT.
My conclusion is that something is not quite right in SOA-ville. The
uptake of general-purpose service enablement is by no means a hockey
stick trend line. The adoption patterns some five years into the SOA
evolutionary path do not show a slam dunk demand effect. The role,
impact and importance of SOA is, in fact, ambiguous -- still. Many
see it as merely an offshoot of EAI, rather than a full-blown paradigm
shift. Meanwhile, some other trends that do demonstrate more of a
hockey stick adoption pattern -- social media, Ruby/Python, RESTful
interactions, and RIAs -- are worth a fresh look in the context of SOA.
The new kids on the innovation block are experimenting at break-neck
speed with social media, social networking, Ruby on Rails, SaaS, Python,
REST and the vital mix of rich Internet application (RIA) approaches.
Something is going on here that shows the compelling attraction of
better collaboration and sharing methods, of self-defining social and
work teams, of faster and easier applications development, of not
moving old systems to the Web but just moving to the Web directly, and
the recognition that off-the-wire applications with fine UIs are the
future... I'm wondering now whether the window for holistic SOA
deployment and value, as it has been classically defined, is being
eclipsed. Is it possible that Web interfaces and data disintermediation
for legacy applications will be enough? Is it possible that exposing
the old applications, and reducing costs of IT support via consolidation
and modernization is enough? In short, is the path of least resistance
to business transformation one that necessarily requires a fording of
the SOA stream? Or is there a shorter, dry path that goes directly to
Web oriented architecture? Is SOA therefore the impediment or empowerment
to transformation on the right scale and at Internet time?

SaaS Single Sign-On: It's Time for a Lighter Approach

SaaS brings a lot of advantages to businesses - no need to invest in
purchasing and maintaining licenses and infrastructure, and no need
to worry about upgrades and bug fixes. Larger companies, however, face
a major challenge related to user authentication and management. Larger
companies have invested a lot of time and effort in improving user
productivity, compliance and security, and in cutting user management
costs. They have done so using technologies like single sign-on and
centralized user management. SaaS applications are now challenging
those efforts and threatening to bring them back to the situation
where every user has several different usernames and passwords and
the customers have several different user directories to maintain.
Currently there are a few common ways for SaaS providers to give users
single sign-on and/or to let customers use their internal user management
solutions to manage access to the SaaS application: (1) Identity
federation; (2) Delegated authentication; (3) Encrypted links; (4)
User directory synchronization. Identity federation, as a concept,
is exactly what is needed -- SaaS providers can offer customers single
sign-on and automated user management based on current information in
their internal user directory. Identity federation based on SAML,
WS-Federation or ADFS, however, requires each customer to invest in
and roll out software compliant with those technologies... Delegated
authentication provides users single sign-on by using an existing
logon, for instance on a corporate intranet, to generate tokens that
can be used to grant access to a SaaS application. However, delegated
authentication does not bring any help to maintenance of user profiles
and access rights, which still have to be maintained manually in the
application. It also requires time and technical resources by the
customer... Google Analytics, the SaaS application for monitoring web
site usage, offers a different and interesting view of the problem.
Each Analytics customer needs to integrate Analytics with its web site
in order to be able to collect and monitor usage statistics. By
choosing a scripting integration model requiring only a few lines of
JavaScript on the web pages, Google managed to lower the requirements
on the customers' web sites and the technical skills required to do
the integration. As a result, they managed to get hundreds of thousands
of customers in 18 months...

RSA Conference 2008: Concordia Done, OSIS To Go

The author blogs on the Project Concordia workshop held at RSA 2008
on 2008-04-07, showing SAML 2.0/WS-Federation single sign-on from a
service provider to an identity provider, the identity provider
authenticating the user via a managed information card and sending
claims from the card to the service provider as SAML 2.0 attributes.
Note that not every combination of SAML 2.0/WS-Federation SP, IdP and
Information Card STS completely works, but enough that the approach was
proven. Slides from the "Concordia/RSA Interop Demo" describe the
products involved. OpenSSO primarily attracts enterprises interested in
deploying a web access management or federation solution using open
source tools. An Information Card RP Extension has been contributed
by Patrick Petit. The OSIS (Open Source Identity Systems) demonstration
of user-centric identity network interoperability between identity
providers, card selectors, browsers, and websites shows how users can
'click-in' to sites via self-issued and managed information cards, or
i-cards. OpenID, Higgins Identity Framework, Microsoft CardSpace, SAML,
WS-Trust, Kerberos, and X.509 components interoperate within an
identity layer built from open-source parts...

Concordia Project Demonstrates Multi-Protocol Interoperability

The Concordia Project, a global cross-industry initiative formed by
members of the identity community to drive harmonization and
interoperability among identity initiatives and protocols, announced
its first interoperability event taking place at RSA Conference 2008
in San Francisco on Monday, April 7 from 9:00am - 12:30pm. The event
will include FuGen Solutions, Internet2, Microsoft, Oracle, Ping
Identity, Sun Microsystems and Symlabs demonstrating varying
interoperability scenarios using Information Card, Liberty Alliance,
and WS-* identity protocols. Over 500 RSA Conference participants have
registered to attend the Concordia Project interoperability event to
date. The April 7 demonstrations have been developed to meet use case
scenarios presented to the Concordia Project by enterprise, education
and government organizations deploying digital identity management
systems and requiring multi-protocol interoperability of identity
specifications. Since the formal launch of the Concordia Project in
June of 2007, deployer use case scenarios involving Information Card,
Liberty Alliance and WS-* identity protocols have been presented by
AOL, the Government of British Columbia, Boeing, Chevron, General
Motors, Internet2, the New Zealand State Services Commission, the US
GSA and the University of Washington. Concordia members decided
collectively on what interoperability demonstrations should be developed
first based on identity management commonalities and priorities
identified by the majority of deploying organizations. During the RSA
Conference event, Concordia members will demonstrate multi-protocol
interoperability based on two of the fourteen use case scenarios
submitted to the project to date. The first includes Oracle, Internet2,
FuGen Solutions, Microsoft, Ping Identity, Sun Microsystems and Symlabs
and is characterized by a user authenticating to an identity provider
(IdP) using an InfoCard and communicating that authentication to a
relying party through either SAML 2.0 or WS-Federation protocols. The
second includes Internet2, Oracle, Sun Microsystems and Symlabs
demonstrating SSO flow between chained SAML and WS-Federation protocols.

XACML Interoperability Demo for Health Care Scenario

At the RSA 2008 Conference, members of the OASIS open standards
consortium, in cooperation with the Health Information Technologies
Standards Panel (HITSP), demonstrated interoperability of the
Extensible Access Control Markup Language (XACML) version 2.0.
Simulating a real world scenario provided by the U.S. Department of
Veterans Affairs, the demo showed how XACML ensures successful
authorization decision requests and the exchange of authorization
policies. The XACML Interop at the RSA 2008 conference utilizes
requirements from Health Level Seven (HL7), ASTM International, and
the American National Standards Institute (ANSI). The demo features
role-based access control (RBAC), privacy protections, structured
and functional roles, consent codes, emergency overrides and filtering
of sensitive data. Vendors show how XACML obligations can provide
capabilities in the policy decision making process. The use of XACML
obligations and identity providers using the Security Assertion
Markup Language (SAML) are also highlighted. According to the
ANSI/HITSP announcement, the multi-vendor demonstrations "highlight
the use of OASIS standards in HITSP-approved guidelines, known as
'constructs,' to meet healthcare security and privacy needs. The
Panel's security and privacy specifications address common data
protection issues in a broad range of subject areas, including
electronic delivery of lab results to a clinician, medication workflow
for providers and patients, quality, and consumer empowerment. HITSP
is a multi-stakeholder coordinating body designed to provide the
process within which affected parties can identify, select, and
harmonize standards for communicating health care information throughout
the health care spectrum. As mandated by the U.S. Department of Health
and Human Services (HHS), the Panel's work supports Use Cases defined
by the American Health Information Community (AHIC). 'This is the first
time the RSA Conference will highlight in an Interop demo the healthcare
scenario, the Electronic Health Records (EHR), and associated
interoperable terminologies of clinical roles, patient consent
directives, obligations, and business logic,' said John (Mike) Davis,
standards architect with the VHA Office of Information in the Department
of Veterans Affairs, and a member of the HITSP Security, Privacy and
Infrastructure Technical Committee."
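
For readers new to XACML, the authorization decision request at the
center of such a demo is itself an XML document. A minimal XACML 2.0
request context, with subject, resource, and action values invented for
illustration, might look like this:

    <Request xmlns="urn:oasis:names:tc:xacml:2.0:context:schema:os">
      <Subject>
        <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id"
                   DataType="http://www.w3.org/2001/XMLSchema#string">
          <AttributeValue>dr.alice</AttributeValue>
        </Attribute>
      </Subject>
      <Resource>
        <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
                   DataType="http://www.w3.org/2001/XMLSchema#string">
          <AttributeValue>patient-record-1234</AttributeValue>
        </Attribute>
      </Resource>
      <Action>
        <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
                   DataType="http://www.w3.org/2001/XMLSchema#string">
          <AttributeValue>read</AttributeValue>
        </Attribute>
      </Action>
      <Environment/>
    </Request>

The policy decision point evaluates a request like this against its
policies and answers Permit, Deny, NotApplicable, or Indeterminate,
optionally with obligations attached, which is where the consent-code
and emergency-override behavior in the demo comes from.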

Web Security Context: Experience, Indicators, and Trust

Members of the W3C Web Security Context Working Group have published
a revised version of the Working Draft specification "Web Security
Context: Experience, Indicators, and Trust." It defines guidelines
and requirements for the presentation and communication of Web security
context information to end-users, as well as good practices for Web site
authors. To facilitate access to relevant background, various sections
of this document are annotated with references to input documents that
are available from the Working Group's Wiki, and to pertinent issues
that the group is tracking. The documents in the wiki include background,
motivation, and usability concerns on the proposals that reference them.
They provide important context for understanding the potential utility
of the proposals. The W3C Web Security Context Working Group focuses on
the challenges that arise when users encounter currently deployed
security technology, such as TLS: While this technology achieves its
goals on a technical level, attackers' strategies shift towards
bypassing the security technology instead of breaking it. When users
do not understand the security context in which they operate, then it
becomes easy to deceive and defraud them.

XML Schema for Media Control

IETF announced that a new Request for Comments "XML Schema for Media
Control" is now available in online RFC libraries. The specification
has been produced by members of the IETF Multiparty Multimedia Session
Control (MMUSIC) Working Group. The RFC 5168 document defines an
Extensible Markup Language (XML) Schema for video fast update in a
tightly controlled environment, developed by Microsoft, Polycom,
Radvision and used by multiple vendors. This document describes a
method that has been deployed in Session Initiation Protocol (SIP)
based systems over the last three years and is being used across
real-time interactive applications from different vendors in an
interoperable manner. New implementations are discouraged from using
the method described except for backward compatibility purposes. New
implementations are required to use the new Full Intra Request command
in the RTP Control Protocol (RTCP) channel. The Multiparty MUltimedia
SessIon Control (MMUSIC) Working Group was chartered to develop
protocols to support Internet teleconferencing and multimedia
communications. These protocols are now reasonably mature, and many
have received widespread deployment. The group is now focused on
the revisions of these protocols in the light of implementation
experience and additional demands that have arisen from other WGs
(such as AVT, SIP, SIPPING, and MEGACO)... The MMUSIC work items
are pursued in close coordination with other IETF WGs related to
multimedia conferencing and IP telephony (AVT, SIP, SIPPING, SIMPLE,
XCON, MEGACO and, where appropriate, MIDCOM and NSIS).
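
The schema itself is tiny. A video fast-update request carried in a SIP
INFO message is an XML document along the lines of the example in the
RFC:

    <?xml version="1.0" encoding="utf-8"?>
    <media_control>
      <vc_primitive>
        <to_encoder>
          <picture_fast_update/>
        </to_encoder>
      </vc_primitive>
    </media_control>

This asks the far-end encoder to send a full intra frame; the RTCP Full
Intra Request command now mandated for new implementations conveys the
same request at the RTP layer instead.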

Unicode Consortium Announces Release of Unicode Standard Version 5.1

The Unicode Consortium has announced the release of Unicode Version 5.1,
which contains over 100,000 characters and provides significant additions
and improvements that extend text processing for software worldwide.
Some of the key features are: increased security in data exchange,
significant character additions for Indic and South East Asian scripts,
expanded identifier specifications for Indic and Arabic scripts,
improvements in the processing of Tamil and other Indic scripts,
linebreaking conformance relaxation for HTML and other protocols,
strengthened normalization stability, new case pair stability, plus
others given below. The Version 5.1.0 data files and documentation are
final and posted on the Unicode site. In addition to updated existing
files, implementers will find new test data files (for example, for
linebreaking) and new XML data files that encapsulate all of the Unicode
character properties. A major feature of Unicode 5.1.0 is the enabling
of ideographic variation sequences. These sequences allow standardized
representation of glyphic variants needed for Japanese, Chinese, and
Korean text. Unicode 5.1 contains significant changes to properties and
behavioral specifications. Several important property definitions were
extended, improving linebreaking for Polish and Portuguese hyphenation.
The Unicode Text Segmentation Algorithms, covering sentences, words,
and characters, were greatly enhanced to improve the processing of Tamil
and other Indic languages. The Unicode Normalization Algorithm now
defines stabilized strings and provides guidelines for buffering.
Standardized named sequences are added for Lithuanian, and provisional
named sequences for Tamil. Unicode 5.1.0 adds 1,624 newly encoded
characters. These additions include characters required for Malayalam
and Myanmar and important individual characters such as Latin capital
sharp s for German. Version 5.1 extends support for languages in Africa,
India, Indonesia, Myanmar, and Vietnam, with the addition of the Cham,
Lepcha, Ol Chiki, Rejang, Saurashtra, Sundanese, and Vai scripts. The
Unicode Collation Algorithm (UCA), the core standard for sorting all
text, is also being updated at the same time. The major changes in UCA
include coverage of all Unicode 5.1 characters, tightened conformance
for canonical equivalence, clearer definitions of internationalized
search and matching, specifications of parameters for customizing
collation, and definitions of collation folding. The next version of
the Unicode locale project (CLDR) is also being prepared on the basis
of Unicode 5.1, and is now open for public data submission.
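
As one concrete example of the new XML data files, the Unicode Character
Database in XML (per the UAX #42 draft) describes each character as a
char element whose attributes carry its properties. A sketch for the
newly added Latin capital sharp s, with attribute abbreviations as I
understand the draft conventions (cp for code point, na for name, gc for
general category, slc for simple lowercase mapping; check the posted
5.1.0 files for the exact attribute set and namespace), would read:

    <ucd xmlns="http://www.unicode.org/ns/2003/ucd/1.0">
      <repertoire>
        <char cp="1E9E" na="LATIN CAPITAL LETTER SHARP S"
              gc="Lu" slc="00DF"/>
      </repertoire>
    </ucd>

The point is that every property becomes machine-readable through
ordinary XML tooling, without parsing the semicolon-delimited text files.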

Thursday, April 3, 2008

IONA Becomes Silver Sponsor of the Apache Software Foundation

IONA announced that it has become a Silver Sponsor of The Apache Software
Foundation. The Apache Software Foundation (ASF) is a non-profit
corporation dedicated to consensus-based, collaborative software
development. Financial sponsorship will help ASF to acquire servers and
hardware infrastructure, purchase bandwidth and needed resources, and
increase awareness of ASF projects and incubating initiatives. IONA's
commitment to Open Source software is an integral part of its 15-year
heritage. With a high degree of Open Source community involvement, IONA
supports the efforts of its developers who are members and contributors
to a number of ASF projects. Aiding the efforts for increased adoption
of Open Source SOA, IONA developers play key roles in the Apache
ActiveMQ project, the Apache ServiceMix project, the CXF project in
the Apache Incubator, and the Apache Camel project, a sub-project of
ActiveMQ. IONA's distributed, Open Source SOA infrastructure solutions,
FUSE Message Broker, FUSE ESB, FUSE Services Framework and FUSE Mediation
Router, are built on code developed in those ASF projects and are
distributed under the terms of the Apache License 2.0. IONA provides
professional support, consulting and training for enterprise customers
looking to deploy this Open Source SOA technology in their mission-critical
business applications. IONA also recently announced the launch of Artix
Connect for WCF (Windows Communication Foundation). Artix Connect for
WCF enables Global 2000 customers to optimize their investments in
Microsoft technology and seamlessly extend connectivity with legacy
applications from within the Microsoft Visual Studio development
environment. By wrapping back-office legacy systems behind
standards-based Web Services Description Language (WSDL) interfaces,
Artix Connect for WCF allows the .NET developer to connect with Java
or CORBA without the need for custom adapters or new code generation.
The product enables companies to leverage existing investments in Java,
CORBA, and more, without leaving the Microsoft Visual Studio development
environment or requiring additional skills. Artix Data Services, a
component of IONA's Artix family of advanced SOA infrastructure products,
offers the broadest support for financial services standards, message
types and validation rules, including SWIFT, SEPA, FpML, TWIST, ISO
20022, CREST and FIX, with the ability to model any data format for
complete compliance.

Facebook Meets .Net

Facebook is a popular social network site and a destination for
application developers, but developers need to learn its peculiarities,
according to a VSLive conference presentation in San Francisco.
Development on Facebook is more like embedded development than
normal Web development, said speaker Jeffrey McManus, CEO of Platform
Associates, a consulting firm. Facebook is a platform featuring a
collection of technologies enabling developers to create applications
that incorporate Facebook data. This could include applications, for
example, that make Web services calls to Facebook and applications
that can run within Facebook. Technologies for developing applications
in Facebook include FBML (Facebook Markup Language) and IFrame, an HTML
construct that opens a hole in a page enabling display of another page
inside of it, according to McManus. Also factored into the equation is
Facebook.Net, a .Net library that wraps Web services and handles
authentication and other elements. Silverlight, Microsoft's new
multimedia presentation technology, also can be supported in Facebook
using FBML.
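
To make the FBML-versus-IFrame distinction concrete, here is a brief
sketch; the uid value and application URL are invented. FBML tags are
expanded by Facebook when it renders a canvas page, while the IFrame
route simply embeds an externally hosted page:

    <!-- FBML: Facebook expands these tags server-side -->
    <fb:name uid="12345" />
    <fb:profile-pic uid="12345" />

    <!-- IFrame: the hole-in-the-page approach -->
    <iframe src="http://apps.example.com/myapp/"></iframe>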

OASIS Open Reputation Management Systems (ORMS) Technical Committee

OASIS recently announced the formation of a new technical committee to
make it easier to validate the trustworthiness of businesses, projects,
and people working and socializing in electronic communities. The OASIS
Open Reputation Management Systems (ORMS) Technical Committee will
define common data formats for consistently and reliably representing
reputation scores. ORMS will be relevant for a variety of applications
including validating the trustworthiness of sellers and buyers in online
auctions, detecting free riders in peer-to-peer networks, and helping
to ensure the authenticity of signature keys in a web of trust. ORMS
will also help enable smarter searching of web sites, blogs, events,
products, companies, and individuals. Because the majority of existing
on-line rating, scoring and reputation mechanisms have been developed
by private companies using proprietary schemas, there is currently no
common method to query, store, aggregate, or verify claims between
systems. The different sources of reputation data -- user feedback
channels (product ratings, comment forms), online user profiles, etc. --
are each uniquely susceptible to bad actors, manipulation of data for
specific purposes, and spammers. ORMS will not attempt to define
algorithms for computing reputation scores. Instead, the OASIS Committee
will provide the means for understanding the relevancy of a score within
a given context.
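
Since ORMS is only now forming, no schema exists yet; but the kind of
portable score record the TC is chartered to define might, purely as a
hypothetical sketch with invented names, look something like:

    <!-- hypothetical sketch; not an OASIS work product -->
    <reputation xmlns="urn:example:orms-sketch">
      <subject>urn:example:seller:4711</subject>
      <score context="online-auction" scale="0..1">0.92</score>
      <source>marketplace-feedback</source>
    </reputation>

The value of a common format is exactly that a score like this could be
queried, aggregated, and verified across systems, with the scoring
algorithm itself left out of scope.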

Open Web SSO Project - Build 4

Developer blogs from the OpenSSO Project announce the release of OpenSSO
Version 1 Build 4. The Open Web SSO project (OpenSSO) provides core
identity services to simplify the implementation of transparent single
sign-on (SSO) as a security component in a network infrastructure.
OpenSSO provides the foundation for integrating diverse web applications
that might typically operate against a disparate set of identity
repositories and are hosted on a variety of platforms such as web and
application servers. This project is based on the code base of Sun Java
System Access Manager, a core identity infrastructure product offered
by Sun Microsystems. The objectives of the OpenSSO project are to provide
open access to an identity infrastructure source code; to enable
innovation to build the next generation of open network identity
services; and to establish open XML-based file formats and
language-independent component application programming interfaces (APIs).
New in OpenSSO Build 4, according to Pat Patterson's blog: "(1) New
OpenSSO configurator; the developers request feedback on the new
configuration UI, via the project mailing lists; (2) WS-Trust Security
Token Service (STS) is available on Glassfish, Sun Application Server,
Sun Web Server, Geronimo, Tomcat and WebSphere; we've done a lot of
trickery with classloaders to get this working across a wide range of
containers... still working on support in Oracle Application Server,
JBoss and WebLogic Server; (3) Simplified STS client sample; (4)
Configuration and/or user store replication across multiple OpenSSO
instances where the embedded instance of OpenDS is in use; (5)
Security/SSL related fixes; (6) General bug fixes in all areas." Note:
OpenDS is an open source community project building a free and
comprehensive next generation directory service. OpenDS is designed
to address large deployments, to provide high performance, to be
highly extensible, and to be easy to deploy, manage and monitor. The
directory service includes not only the Directory Server, but also
other essential directory-related services like directory proxy,
virtual directory, namespace distribution and data synchronization.
Initial development of OpenDS was done by Sun Microsystems, but is
now available under the open source Common Development and Distribution
License (CDDL).
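
For context on item (2), a WS-Trust client asks the STS for a token with
a RequestSecurityToken message. A minimal issue request, assuming the
WS-Trust 1.3 namespace and the WSS SAML 2.0 token-type URI (verify both
against your OpenSSO build), looks like:

    <wst:RequestSecurityToken
        xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
      <wst:TokenType>
        http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0
      </wst:TokenType>
      <wst:RequestType>
        http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue
      </wst:RequestType>
    </wst:RequestSecurityToken>

The STS answers with a RequestSecurityTokenResponse carrying the issued
token, which is what makes the cross-container support mentioned above
useful.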

Using the Eclipse BPEL Plug-In for WS-BPEL V2.0 Business Processes

BPEL V2.0 is a powerful language intended to help in the development of
large, complex applications composed of many components and Web
services. The BPEL vendor-neutral specification was developed by OASIS
to specify business processes as a set of interactions between Web
services. BPEL allows you to describe long-running workflows using
graphical editors to present workflows on human-friendly diagrams.
The Apache Foundation calls its implementation of the Web Services
Business Process Execution Language (WS-BPEL) V2.0 the Orchestration
Director Engine (ODE). ODE executes WS-BPEL processes, which are capable
of communicating with Web services, sending and receiving messages, etc.
The Eclipse BPEL project is a related open source project that provides
an Eclipse plug-in for the visual development of WS-BPEL V2.0 processes.
This article examines ODE V1.1 and the Eclipse BPEL project milestone M3,
describing how to create your own BPEL process and integrate it into
your application. Summary from the ODE web site: "WS-BPEL is an XML-based
language defining several constructs to write business processes. It
defines a set of basic control structures like conditions or loops as
well as elements to invoke web services and receive messages from
services. It relies on WSDL to express web services interfaces. Message
structures can be manipulated, assigning parts or the whole of them to
variables that can in turn be used to send other messages. Apache ODE
(Orchestration Director Engine) executes business processes written
following the WS-BPEL standard. It talks to web services, sending and
receiving messages, handling data manipulation and error recovery as
described by your process definition. It supports both long and short
living process executions to orchestrate all the services that are part
of your application."
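
A skeletal WS-BPEL 2.0 process shows the receive/manipulate/reply shape
the article builds on; partner link, port type, and variable declarations
are elided here, and the names are invented:

    <process name="EchoProcess"
             targetNamespace="http://example.com/bpel/echo"
             xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
      <!-- partnerLinks and variables declarations omitted for brevity -->
      <sequence>
        <receive partnerLink="client" operation="echo"
                 variable="request" createInstance="yes"/>
        <assign>
          <copy>
            <from variable="request"/>
            <to variable="response"/>
          </copy>
        </assign>
        <reply partnerLink="client" operation="echo" variable="response"/>
      </sequence>
    </process>

The Eclipse BPEL plug-in draws and edits exactly this XML behind its
diagrams, and ODE executes it.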

Approval of ISO/IEC DIS 29500 as an International Standard

"'ISO/IEC DIS 29500, Information technology -- Office Open XML File
Formats', has received the necessary number of votes for approval as
an ISO/IEC International Standard... The Ballot Resolution Meeting
(BRM) was held in Geneva during the week 25-29 February 2008. By
eliminating redundancies, the comments had been reduced to just over
1,000 individual issues to be considered. Issues considered as
priorities by national members (such as accessibility, date formats,
conformance issues) were discussed, and the other comments were
addressed through a voting process on the remaining items, a system
agreed by the BRM participants. The issues addressed and revised have
resulted in sufficient national bodies withdrawing their earlier
disapproval votes, or transforming them into positive votes, so that
the criteria for approval of the document as an International Standard
have now been met. Subject to there being no formal appeals from ISO/IEC
national bodies in the next two months, the International Standard
will accordingly proceed to publication. ISO/IEC 29500 is a standard
for word-processing documents, presentations and spreadsheets that is
intended to be implemented by multiple applications on multiple
platforms. According to the submitters of the document, one of its
objectives is to ensure the long-term preservation of documents created
over the last two decades using programmes that are becoming incompatible
with continuing advances in the field of information technology. ISO/IEC
DIS 29500 was originally developed as the Office Open XML Specification
by Microsoft Corporation which submitted it to Ecma International, an
information technology industry association, for transposing into an
ECMA standard. Following a process in which other IT industry players
participated, Ecma International subsequently published the document
as ECMA standard 376. Ecma International then submitted the standard
in December 2006 to ISO/IEC JTC 1, with whom it has category A liaison
status, for adoption as an International Standard under the JTC 1 "fast
track" procedure. This allows a standard developed within the IT industry
to be presented to JTC 1 as a draft international standard (DIS) that
can be adopted after a process of review and balloting. This process
has now been concluded with the end of the 30-day period following the
ballot resolution meeting. The process was open to the IEC and ISO
national member bodies from 104 countries, including 41 that are
participating members of the joint ISO/IEC JTC 1."

Wednesday, April 2, 2008

Effective, Agile, and Connective

Composite applications built from predefined enterprise services form
the core of enterprise service-oriented architecture (enterprise SOA).
Ultimately the goal of enterprise SOA is composition of any service
implemented on any technology by any business partner anywhere in the
world. Open, standards-based technology is a key factor in achieving
this level of interoperability -- similar to plugging a telephone into
the wall. Some of the standards needed relate to the technology used
to implement enterprise SOA, while others define business semantics
and the languages used to describe them... In enterprise SOA, business
semantics consist of definitions of enterprise services and business
processes. These definitions must be described in a manner that allows
the technology layer of the architecture to use them to good effect.
There are three types of definition languages, for processes, service
interfaces, and message content. Process definition languages define
the sequence and conditions in which the steps in a business process
occur. With machine-readable definitions, a business process platform
can ensure that the steps are followed correctly. The need for this
ability is related to the way businesses work -- reacting to an event
with an activity. An event can be almost anything -- contact with a
customer or supplier or reception of an order or an invoice. Enterprises
need a way to describe -- clearly and unambiguously -- how the events
that occur relate to activities in the business. The most important
standard for defining processes is Business Process Modeling Notation
(BPMN). It provides a business-oriented, graphical way of identifying
events and describing activities in easy-to-understand diagrams.
Process definition is a critically important area for enterprise SOA,
and BPMN delivers good business value... Message definition languages
are used to define the structure and content of the data that an
enterprise service sends, receives, or consumes. For example, they
define that the same field always has the same name in all messages.
The languages also describe how to combine fields into larger structures,
how to specialize or extend fields and messages to meet specific needs,
and how to represent the message as an XML schema, for example. [A]
leading standard language for message definition is the UN/CEFACT Core
Components Technical Specification (CCTS). UN/CEFACT is the organization
that also developed the international version of EDI. CCTS provides a
rigorous methodology for defining data unambiguously and includes
rules about how to convert language-neutral definitions into XML. Clear,
consistent definitions of the messages used by enterprise services
deliver business value.
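
A small XML Schema fragment illustrates the "same field, same name"
discipline: one shared type is defined once and reused across messages.
The names here are hypothetical, not actual CCTS library entries:

    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               targetNamespace="urn:example:messages"
               xmlns="urn:example:messages"
               elementFormDefault="qualified">
      <!-- one language-neutral definition, reused everywhere -->
      <xs:complexType name="PartyType">
        <xs:sequence>
          <xs:element name="PartyID" type="xs:token"/>
          <xs:element name="PartyName" type="xs:string"/>
        </xs:sequence>
      </xs:complexType>
      <xs:element name="Order">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="BuyerParty" type="PartyType"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
      <xs:element name="Invoice">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="BuyerParty" type="PartyType"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>

Because Order and Invoice share PartyType, a buyer identifier means the
same thing, and parses the same way, wherever it appears.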

DMTF Chairman: New Possibilities in FY 2008

DMTF Chairman Mike Baskey provides an update on Distributed Management
Task Force activities: "During the past year, we've continued to
streamline the processes both within our organization and in our work
with alliance partners. We are also developing a Conformance Program
that will enable customers to test conformance with the set of standards
that DMTF and our alliance partners are defining. Moreover, we expect
to launch several key initiatives this fiscal year. In addition to the
great work within the System Virtualization, Partitioning, and Clustering
(SVPC) working group around models and profiles, we expect to publish
the Open Virtualization Format (OVF) specification for virtual appliances.
Another DMTF initiative focuses on federation of CMDBs (configuration
management databases); we expect a preliminary release of the CMDBf
standard this year as well. The CMDBf work within DMTF will connect our
organization to the Information Technology Infrastructure Library (ITIL)
and related process management space to increase the relevance of the
work we do in this area. A third DMTF initiative involves power and
energy management and ties into our collaborative work with The Green
Grid. This important development will improve energy efficiency in the
data center, which has great social significance as we wrestle with
the challenges in that domain... DMTF will also continue to make
significant strides in the areas of server and desktop management --
particularly in the integration of Web services into those and other
related device management initiatives. In addition, a greater degree
of interoperability and conformance testing/certification will become
a reality in this coming year -- a very exciting milestone for our
organization. We're also moving forward in getting more of the DMTF
specifications submitted to the International Standards Organization
(ISO), an increasingly important requirement as we expand our role in
the world of international standards and our industry ecosystem..."

Semantic Web in the News

The Semantic Web has been in the news a bit recently. There was the
buzz about Twine, a "Semantic Web company", getting another round of
funding. Then, Yahoo announced that it will pick up Semantic Web
information from the Web, and use it to enhance search... Text search
engines are of course good for searching the text in documents, but
the Semantic Web isn't text documents, it is data. It isn't obvious
what the killer apps will be -- there are many contenders. We know
that the sort of query you do on data is different: the SPARQL standard
defines a query protocol which allows application builders to query
remote data stores. So that is one sort of query on data which is
different from text search. One thing to always remember is that the
Web of the future will have BOTH documents and data. The Semantic Web
will not supersede the current Web. They will coexist. The techniques
for searching and surfing the different aspects will be different but
will connect. Text search engines don't have to go out of fashion...
The Media Standards Trust is a group which has been working with the
Web Science Research Initiative [...] to develop ways of encoding the
standards of reporting that a piece of information purports to meet: "This
is an eye-witness report"; or "This photo has not been massaged apart
from: cropping"; or "The author of the report has no commercial
connection with any products described"; and so on. Like Creative
Commons, which lets you mark your work with a licence, the project
involves representing social dimensions of information. And it is
another Semantic Web application. In all this Semantic Web news, though,
the proof of the pudding is in the eating. The benefit of the Semantic
Web is that data may be re-used in ways unexpected by the original
publisher. That is the value added. So when a Semantic Web start-up
either feeds data to others who reuse it in interesting ways, or itself
uses data produced by others, then we start to see the value of each
bit increased through the network effect.
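
To make the "query on data" point concrete, a SPARQL query against a
remote store reads quite differently from a text search; this standard
FOAF example asks for names and, where available, mailboxes:

    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name ?mbox
    WHERE {
      ?person foaf:name ?name .
      OPTIONAL { ?person foaf:mbox ?mbox }
    }

Sent over the SPARQL protocol to a store's endpoint, it returns tabular
variable bindings rather than ranked documents, which is the difference
the paragraph above is pointing at.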

Tim Berners-Lee and Distinguished Faculty to Present at LinkedData Planet

Ken North has provided updated information about the summer LinkedData
Planet Conference. Sir Tim Berners-Lee, Director of the W3C, will deliver
a keynote, and a distinguished faculty will present a content-rich
technical program in New York City (June 17-18, 2008). Besides the
keynote, there will be a Linked Data Workshop and a Power Panel. The
conference is co-chaired by Bob DuCharme and Ken North. The evolution of
the current Web of "linked documents" to a Web of "linked data" is
steadily gaining mindshare among developers, architects, systems
integrators, users, and
the more than 200 software companies developing semantic web-oriented
solutions. Organizations such as Adobe, Google, OpenLink Software, Oracle,
the W3C, and the grassroots Linking Open Data community have actively
provided technology and thought leadership during the embryonic stages of
this evolutionary transition. Notable examples on the Web today include
DBpedia, the Zoominfo search engine, the Bambora travel recommendation
site, a number of social networking sites, numerous semantic web
technology-based services, various linked data browsers, SPARQL query
language and protocol-compliant data servers and data management systems,
and a growing number of web sites exposing machine-readable data using
microformats, RDFa, and GRDDL. The LinkedData Planet audience will include
system architects, enterprise architects, web site designers, software
developers, consultants and technical managers, all looking to learn more
about linking the growing collection of available data sources and
technologies to get more value from their data for their organizations.
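
As a taste of the machine-readable markup mentioned above, RDFa lets a
page carry data in ordinary XHTML attributes; the document URI and
literal values below are invented:

    <p xmlns:dc="http://purl.org/dc/elements/1.1/"
       about="http://example.com/post/42">
      <span property="dc:title">Linked Data on the Web</span> by
      <span property="dc:creator">Jane Doe</span>
    </p>

A GRDDL- or RDFa-aware crawler extracts the title and creator as RDF
triples while a browser renders the same markup as plain prose.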

Finding the Right ID

As Microsoft looks to advance its interoperability initiative, CardSpace
(the company's identity-management framework) promises to play a key
role in providing authentication between Windows and .NET-based
applications on the one end, and the Web, open source technology and
other key enterprise software platforms on the other. Microsoft lowered
a key barrier by adding support for the recently upgraded industry
standard OpenID specification into its CardSpace client identity-management
framework. Still, it could be some time before developers are called on
to use OpenID and CardSpace for cross-platform enterprise applications.
CardSpace is a key component of Microsoft's .NET Framework 3.5 and is
supported in Internet Explorer 7 and Windows. It's built largely on
Microsoft Windows Communication Foundation (WCF), serving as the
identity provider. While OpenID provides single sign-on for social
networking sites, blogs and casual Web surfing -- letting users log in
one time to employ a public persona across multiple sites -- it's not
robust enough to support government applications, financial
transactions or private data access. Microsoft's Chief Identity Architect
Kim Cameron has said in his Identity Weblog that the company is
interested in OpenID as part of a spectrum of solutions. But Cameron
has written that unlike redirection protocols such as SAML, WS-Federation
and OpenID, CardSpace limits the amount of personal information users
need to give out, making Web surfing more secure. Microsoft describes
CardSpace as an identity selector -- the user creates self-issued cards
and associates a limited set of identity data with each. The CardSpace
user interface is security-hardened, and the user decides what
information will be provided.

W3C XML Query Working Group Invites Comment on XQuery Working Drafts

Members of the W3C XML Query Working Group have published two First
Public Working Drafts: "XQuery Scripting Extension 1.0" and "XQuery
Scripting Extension 1.0 Use Cases." The XQuery Scripting Extension 1.0
specification defines an extension to "XQuery 1.0: An XML Query
Language" (W3C Recommendation") and "XQuery Update Facility 1.0
(W3C Candidate Recommendation). Expressions can be evaluated in a
specific order, with later expressions seeing the effects of the
expressions that came before them. This specification introduces the
concept of a block with local variable declarations, as well as several
new kinds of expressions, including assignment, while, continue, break,
and exit expressions. The "Use Cases" document provides the usage
scenarios that motivate the changes developed in the XQuery Scripting
Extension (XQSE).

Working Group Formed to Support ODRL Service (ODRL-S) Profile

On behalf of the ODRL Initiative, Renato Iannella announced the
formation of a new ODRL Services (ODRL-S) Profile Working Group,
chartered to develop the semantics for licensing Service-Oriented
Computing (SOC) services. The Open Digital Rights Language (ODRL)
Initiative is an international effort aimed at developing and
promoting an open standard for rights expressions. ODRL is intended
to provide flexible and interoperable mechanisms to support transparent
and innovative use of digital content in publishing, distributing and
consuming of digital media across all sectors and communities. The new
profile will build upon prior work completed at the University of
Trento on service licensing. The WG will develop an ODRL Profile that
extends the ODRL language to support the SOC community requirements.
The profile will address the core semantics for the licenses to enable
services to be used, reused, and amalgamated with other services. By
expressing the license terms in ODRL, greater features can be supported,
such as automatically detecting conflicts in service conditions, and
making explicit all requirements and conditions. ODRL-S is designed
as a complementary language to describe licensing clauses of a service
in machine interpretable form. The salient features of ODRL-S are as
follows: (1) ODRL-S unambiguously represents service licensing clauses --
based on a formalization of licensing clauses; (2) ODRL-S is a simple yet
powerful and fully extensible language; (3) ODRL-S can specify licenses
at the service level and the service operation level; (4) ODRL-S can be
used with any existing service description standards and languages; (5)
ODRL-S is developed as a completely compatible profile with ODRL for
describing a service license.
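
Pending the profile, plain ODRL 1.1 already expresses the basic shape
such a service license would take; the service identifier below is
invented, and the service-specific vocabulary is exactly what the WG has
yet to define:

    <o-ex:rights xmlns:o-ex="http://odrl.net/1.1/ODRL-EX"
                 xmlns:o-dd="http://odrl.net/1.1/ODRL-DD">
      <o-ex:agreement>
        <o-ex:asset>
          <o-ex:context>
            <o-dd:uid>urn:example:service:quote-lookup</o-dd:uid>
          </o-ex:context>
        </o-ex:asset>
        <o-ex:permission>
          <o-dd:execute/>
        </o-ex:permission>
      </o-ex:agreement>
    </o-ex:rights>

ODRL-S would extend a structure like this with semantics for service
composition, reuse, and amalgamation.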

Web Services for Remote Portlets (WSRP) Specification Version 2.0

OASIS announced that the membership has voted to approve Version 2.0 of
the "Web Services for Remote Portlets Specification" as an OASIS Standard,
updating the WSRP Version 1.0 OASIS Standard published in August 2003.
The goal of the specification is to enable an application designer or
administrator to pick from a rich choice of compliant remote content
and application providers, and integrate them with just a few mouse
clicks and no programming effort. The OASIS WSRP Technical Committee
was chartered to standardize presentation-oriented Web services for use
by aggregating intermediaries, such as portals. The TC members work
to simplify the effort required of integrating applications to quickly
exploit new web services as they become available. WSRP layers on top
of the existing web services stack, utilizing existing web services
standards and will leverage emerging web service standards (such as
policy) as they become available. The interfaces defined by this
specification use the Web Services Description Language (WSDL). WSRP
version 2 extends the Version 1.0 definitions to support more advanced
use cases, providing: (1) coordination between components; (2) the
ability to move customized portlets across registration and machine
boundaries; (3) a mechanism for describing protocol extensions; (4)
support for leasing of resources; (5) in-band means of getting resources;
(6) support for the CC/PP protocol (device characteristics). WSRP Version
2.0 consists of a prose specification that describes the web service
interface that is exposed by all instances of compliant Producers as
well as the semantics required both of the service and its Consumers,
together with a WSRP version 2 XML schema, WSRP version 2 portTypes
(WSDL), and WSRP version 2 bindings (WSDL).

Thursday, March 27, 2008

Jacquard: a Methodology for Web Publishing

This article introduces Jacquard, a software development methodology
specialized for Web projects, and especially for Web development among
diverse teams. Jacquard seeks to align the work and goals of business
interest personnel, Web designers, programmers, project managers,
database analysts, and more. The author discusses the core principles
of Jacquard, and provides an example of its use in communication between
a user experience team and a programmer team. He uses the W3C's Simple
Knowledge Organization System (SKOS), which is a very useful technology
for the expression of ideas in a way natural to humans, but in a very
Web-ready format (RDF) -- together with the Turtle syntax for RDF, which
is easier to read than RDF/XML. The Jacquard methodology requires formal
expression of the core concepts in a way that can be a shared reference
across the various teams... Jacquard (pronounced like "jack-card" with
more emphasis on the second syllable) is a software development methodology
specialized for Web projects, and especially suited for such development
among diverse teams. The Web is in many ways different from any
information platform before it, and this suggests a fresh approach to
development and teamwork. In general it makes sense to look outward to
the Web, and not inward and backward to traditional methodologies, to
find what works. Lightweight, agile process mirrors the basic nature of
the Web, and so does focusing on the data, and how data is organized for
sharing. The specific application or database implementation is not as
important, nor are the tools you choose to use. This mirrors the Web,
which builds on sharing data, and does not require uniformity of
implementations. As such, implementation independence is one of the core
principles of Jacquard. Another principle is support for decentralized
communication. The Web works well across geographical boundaries, and
with the increase of off-shore outsourcing and flexible work arrangements,
it's useful to learn lessons on decentralization and rich communication.
The Web is such a rich information space that some philosophically
consider it a realm of its own which parallels, and sometimes intersects,
our own real world -- the idea of "cyberspace." Paying attention to where
idioms on the Web draw from real-world concepts and phenomena is important
to usability, and so Jacquard's principle of conceptual alignment
encourages you to take care to express the concepts behind your Web
project, and to make that clear expression the foundation for
communication on the project.
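
The article's pairing of SKOS and Turtle is easy to picture: a shared
concept from the project vocabulary, written in Turtle, stays readable
to business staff and machine-processable for programmers. The concept
names here are invented:

    @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
    @prefix ex:   <http://example.com/project/concepts#> .

    ex:ShoppingCart a skos:Concept ;
        skos:prefLabel "Shopping cart" ;
        skos:altLabel  "Basket" ;
        skos:broader   ex:Purchasing .

The preferred and alternate labels capture the user-experience team's
wording, while the broader/narrower links give programmers a navigable
concept graph.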

Apache POI: Java API To Access Microsoft Format Files

Microsoft has announced a new partnership with Sourcesense, a leading
European open source systems integration consultancy. The two companies
will collaborate on the strategy, development and deployment of open
source solutions for the Microsoft Office product suite. One of the
initial goals of the partnership is contributing to the development of
a new version of Apache POI, a top-level project of the Apache Software
Foundation (ASF). Widely used in financial services and critical
enterprise applications across related sectors, Apache POI is a leading
open source file format reader and writer to create, edit and read
Microsoft Office formats used in Excel, Word, PowerPoint and Visio.
Apache POI is a Java application programming interface (API) used to
access and manage Microsoft Office binary formats, and it can be
applied to the billions of existing binary-format documents, alleviating
the need for complex programming and/or reverse engineering. Because
Apache POI libraries are used in numerous open source projects, developing
future libraries to support the Ecma Office Open XML File Formats (the
default file format in the 2007 Microsoft Word, Excel and PowerPoint
products) will play an important role in new interoperability scenarios
where XML-based standard formats will be key for Office documents. Apache
POI support for Open XML is currently in development within the Apache
Software Foundation; its first release is anticipated during the second
quarter of 2008. Code contributions are made by ASF members and committers
(developers authorized to 'commit' or 'write' code, patches or
documentation to the ASF repository), and overseen by the Apache POI
Project Management Committee (PMC). Details are published in the
Microsoft press release "Microsoft and Sourcesense Partner to Contribute
to Open Source, Apache POI to Support Ecma Office Open XML File Formats."
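
For a feel of the API, this sketch against the POI 3.0-era HSSF classes
writes a one-row binary Excel workbook; the file name and cell contents
are arbitrary:

    import java.io.FileOutputStream;
    import org.apache.poi.hssf.usermodel.HSSFRow;
    import org.apache.poi.hssf.usermodel.HSSFSheet;
    import org.apache.poi.hssf.usermodel.HSSFWorkbook;

    public class PoiSketch {
        public static void main(String[] args) throws Exception {
            HSSFWorkbook wb = new HSSFWorkbook();   // new .xls workbook
            HSSFSheet sheet = wb.createSheet("report");
            HSSFRow row = sheet.createRow(0);
            row.createCell((short) 0).setCellValue("Revenue");
            row.createCell((short) 1).setCellValue(1234.56);
            FileOutputStream out = new FileOutputStream("report.xls");
            wb.write(out);                          // binary BIFF output
            out.close();
        }
    }

The anticipated Open XML support would sit alongside these binary-format
classes rather than replace them, which is why existing POI-based
projects care about the new work.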

OASIS Open Standards Symposium 2008

OASIS announced that "Composability within SOA" will be the focus of
Open Standards 2008, the fifth annual symposium hosted by OASIS. This
event, which will be held in Santa Clara, California, 28-April-2008
through 1-May-2008, will examine the critical issues faced when
architecting service-oriented applications and the benefits being
reaped by real-world implementations that take advantage of Web services
transactions. Presentations on the Business Process Execution Language
(BPEL), Service Component Architecture (SCA), Service Data Objects (SDO),
WS-Transaction, and related standards will be featured. In an Open
Standards 2008 keynote address, Peter Carbone, Vice President, SOA,
Office of the CTO at Nortel, will share insights on the new realities
presented by communications-enabled applications and the opportunities
they create for standards development, software vendors, and service
providers. OASIS has announced the launch of a new Telecommunications
Services Member Section which will work to bring the full advantages
of SOA to the telecommunications industry. At Open Standards 2008, the
OASIS Open CSA Member Section will host a table-top exhibition showcasing
SCA and SDO supporters, BEA, IBM, Primeton, Rogue Wave, SAP, Software AG,
and Sun Microsystems. Executives from these companies will participate
in a press briefing on the current state of SCA on Tuesday, 29-April-2008.