
Tuesday, October 30, 2007

Extending XForms to Enable Rich Text Editing

XForms provides a strict processing model for XML content. The XForms
standard defines controls (text input, combo boxes, text areas, and
more) that allow for editing text within a given XML element or
attribute. Given the proliferation of rich text editing across many
Web-based applications (such as e-mail, blogs, and wikis), the XForms
set of controls can be expanded to accommodate it. This article shows
how to extend the standard set of XForms controls to provide rich text
editing. There are many HTML and ECMAScript rich text editors for HTML
content; for the purposes of this article, we use Dojo, and provide a
sample using FCKEditor as well. Since we require both XForms and a rich
text editor, we also need a mechanism to bind the editor's content to
an XForms instance. This could be accomplished by writing a large
amount of JavaScript, or by using another technology for binding user
interface controls: the XML Binding Language (XBL).
Mozilla XForms provides a way of extending existing user interface
controls using XBL, which also makes this choice desirable. By
following some of the integration rules defined by XForms, XBL, and
a rich text editor, the end result is a simple and powerful addition
to the XForms set of controls. This can further enable the use of
XForms in a variety of applications, such as blogs, e-mail, social
networking sites, and more. These can then leverage the built-in
capabilities of XForms for validation, XML submission, declarative
programming, and more. More Information See also XML and Forms: Click Here

XForms 1.0 Third Edition Published as a W3C Recommendation

W3C has announced the publication of "XForms 1.0 (Third Edition)" as a
W3C Recommendation, signifying that there is significant support for
the specification from the Advisory Committee, the W3C Team, W3C Working
groups, and the public. Forms are an important part of the Web, and they
continue to be the primary means for enabling interactive Web applications.
Web applications and electronic commerce solutions have sparked the
demand for better Web forms with richer interactions. XForms 1.0 is the
response to this demand, and provides a new platform-independent markup
language for online interaction between a person (through an XForms
Processor) and another agent, usually remote. XForms is an XML application
that represents the next generation of forms for the Web. It splits
traditional XHTML forms into three parts: XForms model, instance data,
and user interface. By this means, XForms separates presentation from
content, allows reuse, and provides strong typing. This design reduces
the number of round-trips to the server, and offers device independence
with a reduced need for scripting. XForms 1.0 strives to improve
authoring, reuse, internationalization, accessibility, and overall
usability. The XForms Recommendation document responds to implementor
feedback, brings the XForms 1.0 Recommendation up to date with second
edition errata, and reflects clarifications already implemented in XForms
processors. W3C reports that the Recommendation-level specification
contains 343 diffs that have significantly hardened XForms for enterprise
deployment. The XForms 1.0 Third Edition Test Suite was used in
interoperability testing, including tests for: Document Structure;
Processing Model; Datatypes; Model Item Properties; XPath Expressions
in XForms; Form Controls; XForms User Interface; XForms Actions; Submit
Function; XForms and Styling. More than twenty-five (25) XForms
Implementations were reported as of 2007-10-29. More Information

World Wide Web Consortium Launches Office In Brazil

W3C announced the launch of its first W3C Office in South America: the
W3C Brazil Office, hosted by the NIC.br (Brazilian Network Information
Center) institute, in Sao Paulo, Brazil. W3C looks forward to increasing
interaction with the Portuguese-speaking community through this Office.
Moreover, the current IT landscape in Brazil aligns with exciting
current trends at W3C, such as mobile Web, Web applications, and video
on the Web. Brazil ranks with Russia, India and China -- countries
identified by the acronym BRIC in a 2003 report by the Goldman Sachs
Investment Bank -- as a rapidly growing emerging economy. According to
the report, these economies together may well surpass most of today's
richest countries by the year 2050. Initiatives from the private sector
and efforts by government agencies have promoted investment in business
and infrastructure, from domestic and international investors alike.
Brazil's diversity places the country in a position of distinction in
the South American continent and strongly influences the attraction of
foreign investment. It is the fifth largest country on the planet,
responsible for a very promising, predominantly urban, market.
Approximately 40 million Brazilians have Internet access, the highest
number of Internet users of any country in Latin America.
Telecommunications Industry News reported in October 2007 that the
number of wireless users in Brazil exceeds 112 million. Brazilian
companies compete effectively in a global market, and have delivered
world class solutions in areas of mobile banking, open-source technology,
Web accessibility, wireless Internet access, games industry, e-government
solutions and HD digital television. Regarding HDTV, the development
of a specific model of digital television turns the Brazilian market
into a gigantic laboratory for studying the application of that
technology. As its Members work to realize the full potential of the
Web, W3C collaborates with regional organizations wishing to further
W3C's mission. The W3C Offices assist with promotion efforts in local
languages, help broaden W3C's geographical base, and encourage
international participation in W3C Activities. W3C has Offices in
Australia; the Benelux countries; Brazil; China; Finland; Germany and
Austria; Greece; Hungary; India; Israel; Italy; Korea; Morocco; Southern
Africa; Spain; Sweden; and the United Kingdom and Ireland. More Information See also the Brazilian W3C web site: Click Here

Field Report: MIX Proves XBRL Handles More than Statutory Reporting

Most people who have heard of Extensible Business Reporting Language
(XBRL) associate it with regulatory submissions. U.S.-based public
companies have the option of filing XBRL-tagged results with the
Securities and Exchange Commission (SEC), but it is used more
intensively elsewhere. For example, XBRL is now mandatory for companies
listed on the Shanghai Stock Exchange (among others), and banks now
routinely use XBRL to file financial reports with the U.S. Federal
Deposit Insurance Corporation (FDIC), the Bank of Japan and other
oversight bodies. XBRL has the potential to be even more useful than
this, particularly since the "extensible" aspect of the language
means that any organization can create its own taxonomy to collect
any kind of information. One organization that is pulling in both
standard accounting information and other performance metrics is the
Microfinance Information Exchange (MIX). MIX uses a microfinance-specific
version of the International Financial Reporting Standards (IFRS) taxonomy,
so the accounting data adheres to a broadly supported standard. And
because XBRL is extensible, it enables MIX to add its own taxonomy for
social reporting metrics. Extensibility facilitates the evolution of
the information MIX collects and allows it to make apples-to-apples
comparisons over time. MIX is in the final stages of system development
and will open its XBRL-enabled system to participating institutions
soon. MIX is an early example of how organizations other than financial
regulators will use XBRL to manage the exchange of data among multiple
entities (corporations, nonprofits, nongovernmental organizations and
others). This is especially true because XBRL allows the data collection
process to have low overhead and take a lowest-common-denominator
approach. More Information See also the XBRL FAQ document: Click Here

Radar Networks Ties Together Web 2.0, Semantic Web With 'Twine'

This article presents an online knowledge management service that ties
together social networking, wikis, and blogging with RDF, OWL, SPARQL,
and XSL technologies. Startup Radar Networks has launched in private
beta an online knowledge management system that's among the first to
use computer-driven semantic Web technologies to find and organize
information for people. Called Twine, the service was unveiled at the
Web 2.0 Summit in San Francisco last week. The service has elements of
Web 2.0 technologies, such as social networking, wikis, and blogging,
but goes a step further with an underlying platform built on Web 3.0
technologies defined by the World Wide Web Consortium. Those technologies
include RDF (Resource Description Framework), OWL (the Web Ontology Language),
SPARQL (an RDF query language), and XSL (Extensible Stylesheet Language).
In general, the service enables a person, or groups of people, to
organize information and share it with others. People can upload
contacts, pictures, and documents from their desktops, and save text,
videos, and images from Web sites. Twine also uses software agents to
import content and metadata from other sites, based on the knowledge
the system builds about the user... Data brought into Twine is analyzed
and tagged, with the system understanding if the keywords refer to
people, places, or things. The tags are listed on a user's Twine page.
Clicking on the keyword will bring up all the related information
saved by the user or shared by other people in his network. Radar
Networks, funded by Leapfrog Ventures and Microsoft co-founder Paul
Allen's Vulcan Capital, believes that the semantic Web will enable it
to build a knowledge network that provides users with a richer
experience than other services using older technologies. More Information See also Tim O'Reilly's blog: Click Here

SOA Grid: Grid-Enabled SOA for Scalability

The use of data-grid technology in service-oriented architectures
(SOAs) was the subject of a keynote address at the first annual IT
Architect Regional Conference in San Diego, which took place last week.
Dave Chappell, Oracle's VP and chief technologist for SOA, spoke on
the topic of "Next Generation Grid Enabled SOA" at the IASA event.
Chappell described the sort of problems that happen when processing
large amounts of XML data and trying to ensure reliability and
scalability in an SOA. Oracle's model for grid-enabled SOA stems from
technology that the company gained about seven or eight months ago
when it acquired Tangosol. Oracle now offers this technology for
mission-critical applications, typically involving extreme transaction
processing, through its Coherence product line. A few noteworthy
technologies and concepts have helped enable SOAs, including: (1) The
use of business process orchestration tools, such as Business Process
Execution Language (BPEL) engines; (2) Basic SOA patterns for building
composite apps that are constructed from service functionalities; (3)
Loose coupling and modularity. However, in the process of using these
technologies -- and by choosing to use XML as the means for exchanging
data between apps and services -- the size of the data that is being
shipped around has been inflated by a factor of five, Chappell said.
With SOA, application silos are separated out and exposed as services.
Such an arrangement presents problems in how to share and manage
information across these services. More Information

Microsoft Sets Oslo Project for Model-Centric Applications

Microsoft has unveiled what could be an industry-changing effort in
application modeling and SOA with its "Oslo" project -- which could
significantly change the equation in the Windows application deployment
space. Part of Oslo involves delivering a unified platform integrating
services and modeling, Microsoft said. But instead of models describing
the application, models are the applications themselves. Oslo is a
codename for a set of technical investments that will be delivered in
the next major versions of Microsoft's platform products; these products
include Visual Studio, System Center, BizTalk Server, BizTalk Services,
and the .Net Framework. Beta releases of Oslo technology are due in
2008. With Oslo, Microsoft is making investments aligned with a vision
to simplify the effort needed to build, deploy, and manage composite
applications within and across organizations. The effort builds on
model-driven and service-enabled principles and extends SOA beyond
the firewall. Featured in Oslo are three fundamental components: a
modeling environment, a business process server, including a significant
evolution of BizTalk Server, and a new deployment model. BizTalk Server
"6," will continue to offer technology for distributed SOA and BPM and
include capabilities for composite applications. BizTalk Services "1,"
which provides BizTalk capabilities within the cloud, will feature
Web-based services for hosting composite applications that cross
organizational boundaries; advanced messaging, identity, and workflow
will be featured. Metadata repositories will be aligned across server
and tools products, including System Center "5," Visual Studio "10,"
and BizTalk Server "6." Each will utilize repository technology for
managing, versioning, and deployment models. Release dates of
Oslo-driven products have not been set. More Information

Give Your Applications Mapping Capabilities, Part 1

Some of the most interesting features of modern web sites are based
on Geographical Information System (GIS) technologies. GIS techniques
essentially give you a way to manage and show geographical data in
your systems. For example, a manufacturing company can display a map
showing every building it occupies, every office in a building, or
the location of every sale it makes (worldwide), or a cab company can
use GIS data to track the position of its cabs nearly in real time.
Not too long ago, the expense and rarity of the maps themselves hindered
the use of GIS data in applications, but today, full-featured maps are
available through Google Maps, Google Earth, and Microsoft Virtual
Earth (among others) that you can use to display your GIS data in web
applications. The advent of such mapping systems is one of the most
exciting technologies to emerge in the last few years -- and they're
still undergoing constant and rapid evolution. Today, you can easily
collect geographical data, analyze and filter that data, and merge it
with a mapping provider to create maps that display the data to your
users. This article gives you a launch point by exploring what GIS data
is, how to collect it, and how to manage it. The first step in developing
a GIS business system is to collect the geographical data, a process
interchangeably called geomapping or geocoding. Both terms refer to
the process of retrieving the real-world position of the objects or
places you want to map. This process is simple for single-pointed
objects, either static or moving, such as a building, a car, an antenna --
objects you would show as a point on a map. But the process becomes more
difficult when you want to map lines and areas. Fortunately, the
technologies involved are basically the same; first you learn how to
work with points, and then you extend the process to work with lines
(pairs of points) and then areas (sequences of lines)... An example
Google Earth XML structure is written in an XML language called Keyhole
Markup Language (KML). The most important portion of the code is the
'Point' element, which defines a point using the 'coordinates' node;
the 'coordinates' node takes three comma-separated values that specify
longitude, latitude, and altitude... Getting started with GIS
applications is quite simple: you collect points via GPS or existing
maps, and render them using a mapping application. You don't need
expensive hardware, and you can write simple software without too much
work. With a little effort, you can write exciting business applications
that feature 2D maps or 3D earth-rendering systems. More Information
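
As a rough illustration of the KML structure described above, the
following Java sketch uses the StAX XMLStreamWriter (Java SE 6) to emit
a single Placemark containing a Point; the placemark name, the
coordinate values, and the namespace URI are placeholders to be
adjusted for the KML version and data you actually target.

    import java.io.StringWriter;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;

    public class KmlPointWriter {
        public static void main(String[] args) throws Exception {
            StringWriter out = new StringWriter();
            XMLStreamWriter w =
                XMLOutputFactory.newInstance().createXMLStreamWriter(out);

            // Minimal KML document: one Placemark wrapping one Point.
            w.writeStartDocument("UTF-8", "1.0");
            w.writeStartElement("kml");
            // Namespace is illustrative; use the one for your KML version.
            w.writeDefaultNamespace("http://earth.google.com/kml/2.2");
            w.writeStartElement("Placemark");
            w.writeStartElement("name");
            w.writeCharacters("Head office");             // hypothetical name
            w.writeEndElement();
            w.writeStartElement("Point");
            w.writeStartElement("coordinates");
            // KML order is longitude,latitude,altitude
            w.writeCharacters("-122.0822,37.4222,0");     // hypothetical point
            w.writeEndElement();                          // coordinates
            w.writeEndElement();                          // Point
            w.writeEndElement();                          // Placemark
            w.writeEndElement();                          // kml
            w.writeEndDocument();
            w.close();

            System.out.println(out);
        }
    }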

IBM Offers 'SOA Healthcheck' Workshops

IBM's global services organization is adding "health check" services to
its repertoire of technical services for SOA deployments, looking to
assist users dealing with issues resulting from poor planning or
partnerships with inexperienced or so-called proprietary IT vendors.
The company also is offering its "identity-aware ESB," which is an
enterprise service bus that combines existing IBM products to provide
identity management capabilities. Health-check services and software
will be offered in two workshops to be held at customer sites, featuring
specialized diagnostics and triage capabilities to help identify
potentially unhealthy areas and recommend cures for problem areas. The
applications and services workshop is intended to provide assurance
that an SOA can expand beyond pilot projects. Factors such as application
reuse and service use will be assessed, as will identification of rogue
services as part of a governance policy. Security also will be checked
for service controls and identity management. The infrastructure workshop
features an assessment of infrastructure supporting applications and
services layers in an SOA. Elements examined include infrastructure
flexibility, the ability to adapt to spikes in demand, and verifying
SOA configurations for connectivity. A service management review ensures
that services are being monitored. IBM's identity-aware ESB combines
WebSphere ESB products with Tivoli security and identity management
software to help ensure that access to information, services, and
applications is protected. Auditing of identity and access activity is
enabled.

OASIS Issues Call for Participation: Service Data Objects (SDO) TC

OASIS announced the formation of a new Service Data Objects (SDO)
Technical Committee. The purpose of this TC is to evolve and standardize
the specifications defining the Service Data Objects architecture and
API. Service Data Objects (SDO) is a data programming architecture and
an API whose main purpose is to simplify data programming. The key
concepts, structures and behaviours of SDO are defined by the SDO
for Java specification from the JCP, and the same SDO functionality
will be made available for C++. As far as possible,
SDO should behave consistently across the languages while
also fitting naturally into each language's native programming
environment. The first phase of work will be for SDO use with the C++
programming language. In particular, this TC shall maintain functional
equivalence with the SDO for Java V2.1.1 Specification, under the
stewardship of the Java Community Process (JCP). This TC will continue
development of the "SDO for C++ V2.1" specification and aim to establish
it as an OASIS Standard. In a second phase, the TC will evolve the SDO
specifications (for both Java and C++) to a Version 3.0 level of
functionality. Further programming languages may be selected from the
scoped list by the TC. The TC is encouraged to consider an effective
manner of evolving SDO functionality, keeping the multiple language
specifications current and in alignment. Service Data Objects (SDO)
Version 3.0 is intended to build upon the SDO Version 2.1.1 architecture
and APIs by providing additional functionality to further simplify data
programming so that developers can focus on business logic rather than
the underlying technologies. Subject to the agreement of the Member
Section Steering Committee, the new TC will affiliate with the Open
CSA Member Section as it commences work.
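
To give a feel for the programming model being standardized, here is a
minimal sketch against the SDO for Java V2.1 API. It assumes that a
"Customer" type has already been registered with the runtime (for
example by loading an XSD); the namespace and property names below are
hypothetical.

    import commonj.sdo.DataObject;
    import commonj.sdo.helper.DataFactory;
    import commonj.sdo.helper.XMLHelper;

    public class SdoSketch {
        public static void main(String[] args) {
            // The URI and type name are placeholders for whatever model
            // has been registered with the SDO runtime.
            DataObject customer = DataFactory.INSTANCE.create(
                    "http://example.com/customers", "Customer");

            customer.setString("firstName", "Ada");
            customer.setString("lastName", "Lovelace");

            // Disconnected DataObjects can be serialized to XML, shipped
            // between tiers, and reconstituted on the other side.
            String xml = XMLHelper.INSTANCE.save(
                    customer, "http://example.com/customers", "customer");
            System.out.println(xml);
        }
    }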

Microsoft Joins the Open Geospatial Consortium (OGC)

In a move that is bound to have lasting repercussions for geospatial
application developers, Microsoft has formally joined the Open
Geospatial Consortium (OGC), a nonprofit standards organization. The
move underlines Microsoft's commitment to make its geospatial
applications -- including Microsoft Virtual Earth and SQL Server
2008 -- conform to open standards, which will make it easier for
third-party developers to integrate their own applications more
effectively. According to Ed Katibah, Microsoft's spatial program
manager for SQL Server, SQL Server 2008, which introduces spatial data
types and methods, was designed to conform to OGC standards. The new
version of the database, which is expected to be released in the second
quarter of 2008, will undergo testing in the next few weeks to ensure
its conformity. OGC Chairman and Chief Executive Officer David Schell
said that Microsoft's decision to join OGC represents a major change
in the industry. In its early years, OGC was supported primarily by
developers of geospatial tools for vertical markets, such as ESRI and
Autodesk. The recent addition of Google and now Microsoft represents
a sea change, according to Schell. Schell expects Microsoft's
participation to serve as a stabilizing force. As developers build new
applications they can be assured that, by following OGC standards,
their efforts will not meet with immediate obsolescence as a result
of some major company introducing a new standard that suddenly changes
everything. Schell: "The center of gravity of the market is now shifting;
this really does indicate a significant maturation in the industry. It
indicates a very broad acceptance of geospatial information as part of
infrastructure development. And it also indicates that the dialogue
concerning the harmonization of spatial best practices has reached the
highest level." OGC is an international industry consortium of 346
companies, government agencies and universities participating in a
consensus process to develop publicly available interface specifications.
OpenGIS Specifications support interoperable solutions that "geo-enable"
the Web, wireless and location-based services, and mainstream IT.

The Hypertext Transfer Protocol (HTTP) Entity Tag ("ETag") Response Header in Write Operations

A revised version of the IETF specification "The Hypertext Transfer
Protocol (HTTP) Entity Tag ('ETag') Response Header in Write Operations"
has been released in connection with the formation of a new HTTPbis
Working Group activity. The Hypertext Transfer Protocol (HTTP)
specifies a state identifier, called "Entity Tag", to be returned in
the "ETag" response header. However, the description of this header
for write operations such as PUT is incomplete, and has caused confusion
among developers and protocol designers, and potentially interoperability
problems. This document explains the problem in detail and suggests
both a clarification for a revision to the HTTP/1.1 specification
(RFC 2616) and a new header for use in responses, making HTTP entity
tags more useful for user agents that want to avoid round-trips to the
server after modifying a resource. The RFC 2616 specification is a bit
vague about what an ETag response header upon a write operation means,
but this problem is somewhat mitigated by the precise definition of a
response header. The proposal for enhancing RFC 2616 in this regard is
made in document Section 3. More Information See also the related issue 'Clarify "Requested Variant"': Click Here
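
To make the issue concrete, here is a minimal Java sketch (using
java.net.HttpURLConnection against a hypothetical URL) of a client that
performs a PUT and reads back the ETag response header; the ambiguity
the draft addresses is whether that tag describes the entity the client
sent or the entity the server actually stored.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class EtagOnPut {
        public static void main(String[] args) throws Exception {
            // Hypothetical resource; any server returning an ETag on PUT will do.
            URL url = new URL("http://example.org/notes/42");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("PUT");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/plain");

            OutputStream out = conn.getOutputStream();
            out.write("updated note body".getBytes("UTF-8"));
            out.close();

            int status = conn.getResponseCode();
            String etag = conn.getHeaderField("ETag");
            System.out.println("Status: " + status + ", ETag: " + etag);

            // A client hoping to avoid a follow-up GET could reuse this value
            // in a later conditional request (e.g. If-Match) -- but only if
            // the server guarantees the tag reflects the stored entity
            // unchanged, which is exactly what RFC 2616 leaves unclear.
            conn.disconnect();
        }
    }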

Updated W3C Working Draft: XMLHttpRequest Object for Ajax

Members of the W3C Web API Working Group have released an updated
Working Draft of "The XMLHttpRequest Object" specification, superseding
the document of 2007-06-18. The core component of Ajax, the
XMLHttpRequest object is an interface that allows scripts to perform
HTTP client functions, such as submitting form data or loading data
from a remote Web site. The name "XMLHttpRequest" is used for
compatibility with the Web, but may be misleading. First, the object
supports any text-based format, including XML. Second, it can be used
to make requests over both HTTP and HTTPS (some implementations support
protocols in addition to HTTP and HTTPS, but that functionality is not
covered by this specification). Finally, it supports "requests" in a
broad sense of the term as it pertains to HTTP; namely all activity
involved with HTTP requests or responses for the defined HTTP methods.
The XMLHttpRequest object can be used by scripts to programmatically
connect to their originating server via HTTP. The document is being
produced as part of the Rich Web Clients Activity in the W3C Interaction
Domain. With the ubiquity of Web browsers and Web document formats
across a range of platforms and devices, many developers are using the
Web as an application environment. Examples of applications built on
rich Web clients include reservation systems, online shopping or
auction sites, games, multimedia applications, calendars, maps, chat
applications, weather displays, clocks, interactive design applications,
stock tickers, currency converters and data entry/display systems. Web
client applications typically have some form of programmatic control.
They may run within the browser or within another host application. A
Web client application is typically downloaded on demand each time it
is "executed," allowing a developer to update the application for all
users as needed. Such applications are usually smaller than regular
desktop applications in terms of code size and functionality, and may
have interactive rich graphical interfaces. More Information See also W3C Rich Web Clients: Click Here

Sieve Email Filtering: Representing Sieves and Display Directives in XML

This document describes a way to represent Sieve email filtering language
scripts in XML. Sieve ("Sieve: An Email Filtering Language") is a
language for filtering email messages at or around the time of final
delivery. It is designed to be implementable on either a mail client or
mail server. It is meant to be extensible, simple, and independent of
access protocol, mail architecture, and operating system and it is
intended to be manipulated by a variety of different user interfaces.
Some user interface environments have extensive existing facilities for
manipulating material represented in XML. While adding support for
alternate data syntaxes may be possible in most if not all of these
environments, it may not be particularly convenient to do so. The obvious
way to deal with this issue is to map sieves into XML, possibly on a
separate backend system, manipulate the XML, and convert it back to
normal Sieve format. Several Sieve extensions have already been specified
(RFC 3431, RFC 3598, RFC 3685, RFC 3934) and many more are planned. The
set of extensions available varies from one implementation to the next
and may even change as a result of configuration choices. It is therefore
essential that the XML representation of Sieve be able to accommodate
Sieve extensions without requiring schema changes. It is also desirable
that Sieve extensions not require changes to the code that converts to
and from the XML representation. This specification defines an XML
representation for sieve scripts and explains how the conversion process
to and from XML works. The XML representation is capable of accommodating
any future Sieve extension as long as the underlying Sieve grammar
remains unchanged. Furthermore, code that converts from XML to the
normal Sieve format requires no changes to accommodate extensions,
while code used to convert from normal Sieve format to XML only requires
changes when new control commands are added -- a rare event. An XML
Schema and sample code to convert to and from XML format are also
provided in the appendices. More Information
See also the IETF Sieve Mail Filtering Language (SIEVE) Working Group Charter: Click Here

Monday, October 29, 2007

Mashups: The Evolution of the SOA

This article is the first in a three-part series, providing a general
overview of the characteristics and technologies related to the term
Web 2.0 so that a platform can be laid for a detailed discussion about
how they relate to Service-Oriented Architecture (SOA) development.
The second part in the series examines the current state of IT and SOA
in the enterprise and discusses what situational applications and a
mashup ecosystem can offer. The third part describes the IBM Mashup
Starter Kit (IBMMSK) and how you can use it to develop situational
applications. Web 2.0 is best described as a core set of patterns that
are observable in applications that share the Web 2.0 label. These
patterns are services, simplicity, and community. This shift to a
service-based model has implications for how Web 2.0 applications are
now developed. The Web infrastructure is now seen as the bottom of the
application development stack. The prevalence of Web APIs lets you avoid
the work of creating certain features, thereby reducing your workload
so you can build applications faster. In addition, you can integrate
two or more of these Web APIs to create something new and unique, known
as a mashup. Web-based APIs can then be invoked using technologies,
such as Ajax, which provides a means for the browser client to
communicate with the server via JavaScript (both synchronously and
asynchronously). This means the application doesn't require the entire
page to be reloaded every time the client needs to communicate with
the server. You can use JavaScript Object Notation (JSON) to serialize
and deserialize objects so that they can be sent between the browser
client and the server via Ajax. It's now quite common to see existing
services provide SOAP, Ajax, and REST interfaces. RSS and Atom feeds
have now gone beyond being used only to subscribe to blogs and news
feeds, and are seen as potential approaches to simplify specific
content-centric application architectures. The Atom specification is
a more recent evolution of the ideas originally embodied by RSS and
provides useful features, such as the Atom Publishing Protocol (APP),
which lets you publish information to be added to a feed. If event-driven
architectures are now part of the SOA framework, then feeds can be
considered part of the service paradigm and should be leveraged as such. More Information
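
As a small illustration of the JSON exchange mentioned above, the
following Java sketch uses the org.json library (an assumption; any
JSON library would do) to serialize a value on the server side and
parse the same text back, which is essentially what happens at either
end of an Ajax call; the field names are made up.

    import org.json.JSONObject;

    public class JsonForAjax {
        public static void main(String[] args) throws Exception {
            // Serialize: the payload a servlet might write back to an
            // XMLHttpRequest caller.
            JSONObject stock = new JSONObject();
            stock.put("symbol", "IBM");
            stock.put("price", 113.25);
            stock.put("currency", "USD");
            String wirePayload = stock.toString();   // {"symbol":"IBM",...}

            // Deserialize: what the receiving side does with the same text.
            JSONObject parsed = new JSONObject(wirePayload);
            System.out.println(parsed.getString("symbol") + " @ "
                    + parsed.getDouble("price"));
        }
    }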

Data Sources as Web Services

WSO2 ('WS OH 2') is growing in popularity and the team continues to
produce quality products. The WSO2 (a.k.a. Web Services Oxygen Tank)
team has pulled together a platform around Apache SOA projects,
producing an application server, ESB, Web 2.0 mashup engine, and more.
Most recently, they have released Version 2.1 of the Web Services
Application Server (WSAS). The release includes a lot of compelling
features, but this article focuses upon WSO2 Data Services -- a new
feature available in WSO2's WSAS 2.0 platform. The author introduces
Data Services, examining their architecture and utilization, and
exploring pros and cons of this convenient feature. Data services are
standard web services that have been configured within WSAS to map
service calls to one or more backend data sources. Configuration
is captured in XML and can either be performed by hand and uploaded as
a complete deployment module, or deployed via the web-based Data
Service configuration wizard. Once deployed, these services can either
be consumed by other WSAS services or be made available to external
clients. Data Services are essentially the SOA-equivalent of the Data
Access Object pattern. Granted, Data Services are at a much higher
level of abstraction, but they serve a similar role in a layered
architecture. They enable higher level services or even client
applications to access underlying datasets without regard for the
implementation details involved... The heart of any enterprise
application is data. Applications provide the ability to view, sort,
filter, edit, create, and delete data. In a SOA, access to data is
also paramount. Typically this involves wrapping an existing business
object (EJB or POJO) with a web service. Another option is to bypass
this additional layer and directly expose data capabilities via WSO2
Data Services. Data services are convenient, configurable, and great
for service-oriented data for a demo or even as part of an SOA. More Information
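
For a sense of how such a service is consumed, here is a hedged Java
sketch that invokes a deployed data service as an ordinary Web service
using Apache Axis2's ServiceClient; the endpoint address, operation
name, and namespace are hypothetical and depend entirely on how the
data service was configured in WSAS.

    import org.apache.axiom.om.OMAbstractFactory;
    import org.apache.axiom.om.OMElement;
    import org.apache.axiom.om.OMFactory;
    import org.apache.axiom.om.OMNamespace;
    import org.apache.axis2.addressing.EndpointReference;
    import org.apache.axis2.client.Options;
    import org.apache.axis2.client.ServiceClient;

    public class DataServiceClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint; in practice it comes from the data
            // service you deployed through the WSAS configuration wizard.
            ServiceClient client = new ServiceClient();
            Options options = new Options();
            options.setTo(new EndpointReference(
                    "http://localhost:9443/services/CustomerDataService"));
            client.setOptions(options);

            // Build the request payload: <getCustomer><id>42</id></getCustomer>
            OMFactory factory = OMAbstractFactory.getOMFactory();
            OMNamespace ns = factory.createOMNamespace(
                    "http://example.com/customer-ds", "cds");
            OMElement request = factory.createOMElement("getCustomer", ns);
            OMElement id = factory.createOMElement("id", ns);
            id.setText("42");
            request.addChild(id);

            // The data service maps this call to the backend query it wraps
            // and returns the result set as XML.
            OMElement response = client.sendReceive(request);
            System.out.println(response);
        }
    }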

RDFa Primer: Embedding Structured Data in Web Pages

The updated version of "RDFa Primer" provides an introduction to RDFa,
a method for structuring data by embedding it in XHTML. This version of
the RDFa Primer is a substantial update to the previous version,
representing several design changes since the previous version was
published. Current Web pages, written in XHTML, contain inherent
structured data: calendar events, contact information, photo captions,
song titles, copyright licensing information, etc. When authors and
publishers can express this data precisely, and when tools can read
it robustly, a new world of user functionality becomes available,
letting users transfer structured data between applications and Web
sites. An event on a Web page can be directly imported into a desktop
calendar. A license on a document can be detected to inform the user
of his rights automatically. A photo's creator, camera setting
information, resolution, and topic can be published as easily as the
original photo itself. RDFa lets XHTML authors express this structured
data using existing XHTML attributes and a handful of new ones. Where
data, such as a photo caption, is already present on the page for
human readers, the author need not repeat it for automated processes
to access it. A Web publisher can easily reuse data fields, e.g. an
event's date, defined by other publishers, or create new ones
altogether. RDFa gets its expressive power from RDF, though the reader
need not understand RDF before reading this document. RDFa uses Compact
URIs, which express a URI using a prefix. More Information

DocBook V5.0: The Transition Guide

Jirka Kosek announced the availability of an updated "howto" DocBook
Version 5.0 Transition Guide. The document is targeted at DocBook users
who are considering switching from DocBook V4.x to DocBook V5.0. It
describes differences between DocBook V4.x and V5.0 and provides some
suggestions about how to edit and process DocBook V5.0 documents. There
is also a section devoted to conversion of legacy documents from DocBook
4.x to DocBook V5.0. The differences between DocBook V4.x and V5.0 are
quite radical in some aspects, but the basic idea behind DocBook is
still the same and almost all element names are unchanged. Because of
this it is very easy to become familiar with DocBook V5.0 if you know
any previous version of DocBook. For more than a decade, the DocBook
schema was defined using a DTD. However DTDs have serious limitations
and DocBook V5.0 is thus defined using a very powerful schema language
called RELAX NG. Thanks to RELAX NG, it is now much easier to create
customized versions of DocBook, and some content models are now cleaner
and more precise. The Technical Committee provides the DocBook 5.0
schema in other schema languages, including W3C XML Schema and an XML
DTD, but the RELAX NG Schema is the normative schema. All DocBook V5.0
elements are in the namespace http://docbook.org/ns/docbook. XML
namespaces are used to distinguish between different element sets. In
the last few years, almost all new XML grammars have used their own
namespace. It is easy to create compound documents that contain elements
from different XML vocabularies. The namespace name serves only as an identifier. This
resource is not fetched during processing of DocBook documents and
you are not required to have an Internet connection during processing.
If you access the namespace URI with a browser, you will find a short
explanatory document about the namespace. In the future this document
will probably conform to (some version of) RDDL and provide pointers
to related resources. More Information See also DocBook V5.x: Click Here

Friday, October 26, 2007

The Trouble With XML Schema Imports and Includes

Those who have written W3C XML Schemas will know that there are two
mechanisms for sharing Schema definitions across files. "xs:include"
is used like traditional programming language 'include' statements,
so that you can split a single large file up into separate, modular
pieces. "xs:import" is used when you need to use Schema definitions
from a different XML namespace, as W3C XML Schema doesn't allow a
single Schema file to contain definitions for more than a single
namespace... I have been involved with a number of standards groups,
and I know that what comes out of a standards effort depends on
what requirements and scope have been given to the group. So while
I wish that the W3C's XML Schema Working Group had been able to give
us something better for Schema definition sharing than just
import/include statements, I don't think the working group ever had
a scope that would have allowed them, for example, to define a
standard for repositories of Schema definitions, and for how to
compose and generate Schemas flexibly from the definitions in one or
more repositories. Instead, we have import and include, which are
about the best you can do when your scope only allows you to deal with
schemas, and not with higher-level concepts like repositories. As a
general rule, almost any solution will work if the problem is simple
enough and straightforward enough and isn't particularly demanding in
any way. Many uses of import/include statements are simple enough that
these built-in mechanisms do what is needed. However, there are
other situations where import/include don't work as you would hope,
and I thought I would mention a couple that I have run into in practice... More Information
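
As a rough sketch of how a processor assembles definitions from several
files, the following Java (JAXP validation API) example loads a driver
schema whose xs:include and xs:import references are resolved relative
to its location, and also shows handing the processor several schema
documents at once; all file names are hypothetical.

    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.transform.Source;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;

    public class SchemaComposition {
        public static void main(String[] args) throws Exception {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);

            // Driver-document approach: main.xsd pulls in same-namespace
            // modules with xs:include and other namespaces with xs:import;
            // relative schemaLocation values resolve against its location.
            Schema driver =
                factory.newSchema(new StreamSource(new File("main.xsd")));

            // Alternative: hand the processor several schema documents at
            // once, which sidesteps schemaLocation hints for imports.
            Source[] parts = {
                new StreamSource(new File("orders.xsd")),
                new StreamSource(new File("customers.xsd"))
            };
            Schema combined = factory.newSchema(parts);

            Validator validator = combined.newValidator();
            validator.validate(new StreamSource(new File("order-12345.xml")));
            System.out.println("order-12345.xml is valid");
        }
    }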

Google Search Appliance Version 5.0 Features SAML-Based Security

Google Enterprise Labs announced the release of the Google Search
Appliance Version 5.0, featuring enhanced security for enterprise
applications. The Google Search Appliance provides document and
user-level access control across all web-enabled enterprise content
to ensure that users only see search results for documents they're
permitted to access. With version 5.0, the designers made significant
performance improvements to the SAML SPI framework; as a result,
customers who leverage the SAML SPI will see improved performance on their
secured search queries. If the search appliance is configured to use
the SAML Authentication and Authorization SPI, the search appliance
sends a SAML authorization request to the Policy Decision Point, using
the identity obtained for the user during serve authentication. The
SPI enables a Google Search Appliance to communicate with an existing
access control infrastructure via standard SAML messages. The
Authorization SPI is also required in order to support X.509
certificate authentication during serve. When the user's identity
has been authenticated, the Authorization SPI checks to see whether
the user is authorized to view each of the secure documents that match
their search. Using the authenticated cookie set during Authentication,
the search appliance passes the user's session cookie to the Policy
Decision Point's Authorization Service URL inside a SAML Authorization
request. If the response from the Policy Decision Point is inconclusive,
the search appliance will also attempt to verify authorization with a
HEAD request (for content crawled via HTTP Basic or NTLM HTTP) or GET
request (for content crawled via Forms Authentication) before removing
the content from the search results list. The "Windows Authentication
via Google SAML Bridge for Windows" is a special case of the
Authentication and Authorization SPI. The search appliance sends SPI
messages to the Google SAML Bridge for Windows to verify the user's
credentials and authorization to view secure content. This method
requires you to set up the Google SAML Bridge for Windows to handle
the SAML messages from the search appliance's Authorization and
Authentication SPI. The Google SAML Bridge for Windows acts as an
Identity Provider and Policy Decision Point. More Information

CSS Snapshot 2007: W3C Working Draft

W3C's CSS Working Group recently published the First Public Working
Draft for "Cascading Style Sheets (CSS) Snapshot 2007." The document
collects together into one definition all the specs that together
form the current state of Cascading Style Sheets (CSS). All stable
specifications that have been implemented for the Cascading Style
Sheets (CSS) language at all Levels are given in this single document
as a guide for authors. The snapshot is not a guide to what features
are implemented. The group expects it to be a future Working Group Note.
When the first CSS specification was published, all of CSS was contained
in one document that defined CSS Level 1. CSS Level 2 was also defined
by a single, multi-chapter document. However, for CSS beyond Level 2,
the CSS Working Group chose to adopt a modular approach, where each
module defines a part of CSS, rather than to define a single monolithic
specification. This breaks the specification into more manageable chunks
and allows more immediate, incremental improvement to CSS. Since different
CSS modules are at different levels of stability, the CSS Working Group
has chosen to publish this profile to define the current scope and
state of Cascading Style Sheets as of late 2007. The profile includes
only specifications that we consider stable and for which we
have enough implementation experience that we are sure of that stability.
The CSS Working Group considers the CSS1 specification to be obsolete.
CSS 2.1 is now a Candidate Recommendation -- effectively though not
officially the same level of stability as CSS2 -- and should be
considered to obsolete the CSS2 Recommendation. In case of any conflict
between the two specs CSS 2.1 contains the definitive definition. Features
in CSS2 that were dropped from CSS 2.1 should be considered to be at the
Candidate Recommendation stage, but note that many of these have been
or will be pulled into a CSS Level 3 working draft, in which case that
specification will, once it reaches CR, obsolete the definitions in CSS2.
CSS Level 3 builds on CSS Level 2 module by module, using the CSS 2.1
specification as its core. Each module adds functionality and/or replaces
part of the CSS 2.1 specification. The CSS Working Group intends that
the new CSS modules will not contradict the CSS 2.1 specification: only
that they will add functionality and refine definitions. As each module
is completed, it will be plugged in to the existing system of CSS 2.1
plus previously-completed modules. More Information See also the CSS Working Group: Click Here

Behavioral Extensions to CSS

W3C announced the release of an updated version of the "Behavioral
Extensions to CSS" Working Draft. The document was produced by members
of the W3C CSS (Cascading Style Sheets) Working Group as part of the
Style Activity. In 1999, the CSS working group worked on a "Behavioral
Extensions to CSS" specification that proposed syntax for actual
binding definitions. Since then, separate languages have been developed
for this purpose (e.g. XBL), and the CSS-specific way of defining
bindings has been dropped. CSS is still useful for connecting these
binding languages to actual elements, however. This specification
defines two features of CSS that can be used with any such binding
language. Behavioral Extensions provide a way to link to binding
technologies, such as XBL, from CSS style sheets. This allows bindings
to be selected using the CSS cascade, and thus enables bindings to
transparently benefit from the user style sheet mechanism, media
selection, and alternate style sheets. A "binding" is a definition of
presentation or behavior that can be attached to an element, and
bindings can be attached to elements through CSS using the 'binding'
property. Bindings attached through CSS must only remain on the bound
element as long as the element continues to match the style rule. If
at any time a resolution of style on the element determines that a
different binding should be attached, the old binding must be detached.
Whenever an element is removed from a document, any bindings attached
to that element via CSS must be detached. The ":bound-element"
pseudo-class, when used from a binding, must match the bound element
of that binding. If the selector is used in a context that is not
specific to a binding, then it must match any bound element. One
example shows an XBL binding that uses this pseudo-class to draw
a border around each of the children of the bound element, but no
other elements. More Information

Setting Out for Service Component Architecture

Quite a number of bloggers have been wondering about the Service
Component Architecture (SCA) standardization effort. SCA's pick-and-choose
specification style makes it easy to get lost in the SCA universe.
Because there is little experience with using SCA in the community,
many areas that deserve detailed specification are still under
investigation or have not even been touched yet. At first, readers
might easily be misled into believing that SCA is (yet another)
revolution in Java land. This is wrong on two counts. Firstly, although
Java-oriented work attracts most of the attention, SCA is not only about
Java land: there are specifications for C++, COBOL, PHP, and BPEL. What
we want to focus on though is that SCA is not primarily about replacing
existing environments (such as Java EE and OSGI) but about creating an
infrastructure in which applications can cross the boundaries between
different programming models in these environments. The details of how
SCA will integrate with existing technologies are the missing pieces
in the catalogue of published SCA specifications. There is simply still
a lot of work ahead to figure out the tedious details of integration at
all layers with these environments. Technology integration is hard. No
single interesting technology should be limited in its use. And yet,
SCA is all about cross-technology integration... SCA defines an assembly
language that may be integrated into such frameworks in order to realize
a number of benefits. We will discuss various benefits in detail. Here
are the claims we will make: (1) SCA can be supported in conjunction
with existing technologies. That will likely be its primary use-case.
(2) SCA's fundamental value lies in providing the foundation to
cross-technology programming model integration, distributed deployments
and assembly. (3) SCA will allow implementers to provide proprietary
technologies in a consistent and recognizable way -- which is good for
both developers and vendors.

Wednesday, October 24, 2007

W3C First Public Working Draft: RDFa in XHTML: Syntax and Processing

W3C announced that the Semantic Web Deployment Working Group and the
XHTML2 Working Group jointly have published the First Public Working
Draft for "RDFa in XHTML: Syntax and Processing." RDFa attributes can
be used with languages such as HTML and XHTML to express structured data.
RDFa allows terms from multiple independently-developed vocabularies to
be freely intermixed. This document has parsing rules for those creating
an RDFa parser as well as guidelines for users in organizations who wish
to use RDFa. For those who would like to start using RDFa, the RDFa Primer
is an introduction to its use and shows real-world examples. RDFa
alleviates the pressure on XML format authors to anticipate all the
structural requirements users of their format might have, by outlining
a new syntax for RDF that relies only on XML attributes. This
specification deals specifically with the use of RDFa in XHTML, and
defines an RDF mapping for a number of XHTML attributes, but RDFa can
be easily imported into other XML-based markup languages. RDFa shares
some use cases with microformats. Whereas microformats specify both a
syntax for embedding structured data into HTML documents and a vocabulary
of specific terms for each microformat, RDFa specifies only a syntax
and relies on independent specification of terms (RDF Classes and
Properties) by others. RDFa allows terms from multiple
independently-developed vocabularies to be freely intermixed and is
designed such that the language can be parsed without knowledge of
the specific term vocabulary being used. More Information See also the RDFa Primer 1.0: Click Here

Burton Cautiously Optimistic about SCA for SOA

The analysts who cover service-oriented architecture (SOA) for Burton
Group Inc. have some reservations about the Service Component
Architecture (SCA) specification, but have concluded the vendor backing
is so strong "adoption may be inevitable." Touted as "a new programming
model for SOA" by its vendor sponsors led by IBM and now making its
way through the OASIS standards process, SCA is not yet baked into many
products beyond IBM's WebSphere, but Burton analysts expect adoption
to pick up in 2008. Given its apparent inevitability as a vendor
supported standard, Anne Thomas Manes, vice president and research
director at Burton, spent more than an hour Tuesday in a Web seminar
explaining SCA's potential promise and problems to clients. She said
the concerns about SCA at Burton Group include the fact that SCA is
made up of more than 14 specifications. Analysts are skeptical that
the various technical committees working on those specifications can
reach the goal of creating an overall standard to "simplify" service
creation and composition. This leads to concern that SCA could suffer
the same fate as Common Object Request Broker Architecture (CORBA),
which failed to achieve its promise in the 1990s because, as one analyst
put it, "too many cooks spoiled the broth." "There is some concern that
SCA can hide all the complexities," she said. The good news is that SCA
has the potential, as yet unproven, to be a language- and protocol-independent
programming model for SOA. Languages that will be supported in SCA cover
most of those a non-Microsoft coder would be working on today, ranging
from COBOL to Ruby. Support is planned for Java, including Plain Old Java
Objects (POJO), Spring, Enterprise Java Beans, C, C++, BPEL and PHP. More Information See also OASIS SCA-TCs: Click Here

W3C Workshop Report: Next Steps for XML Signature and XML Encryption

W3C's report of the Workshop on Next Steps for XML Signature and XML
Encryption is now available. The September 25-26, 2007 event was
hosted by VeriSign in Mountain View, California. This Workshop included
implementors and users of the XML Canonicalization, XML Signature and
XML Encryption suites of specifications. The participants included
implementers and specification writers that have built their work on
top of these specifications. Participants in the workshop had to submit
a position paper. The workshop had 25 participants from over fifteen
organizations. The aim of this workshop was to gather information and
experiences with these specifications, to start to build community
consensus, and to make initial recommendations to the W3C about possible
next steps for the XML Security specifications. The report shows strong
interest in additional work on XML security at W3C. A basic signature
profile, the referencing and transform models, updating the set of
supported cryptographic algorithms, and revisiting XML canonicalization
were seen as highest priority among the several topics identified by
the participants. The XML Security Specifications Maintenance Working
Group has been chartered to produce a draft charter for follow-up work.
This Workshop report will serve as input to that deliverable. To enable
discussion among Workshop attendees, Working Group members, and the
broader community, a new mailing list has been created. Participation
in that mailing list is open to all interested parties. More Information
See also the XML Security Specifications Maintenance WG: Click Here

XML Data Interchange in Java ME Applications

In this article the author shows how the Data Transfer Object design
pattern is implemented in Java ME architectures and why you might want
to use XML-based DTOs rather than Java beans for faster data interchange
between tiers. Author Mario La Menza presents an example architecture
that supports the use of XML-based DTOs and also introduces MockMe, a
free tool for generating XML-based mock objects so you can test the
presentation layer of your Java mobile applications without waiting for
server-side data. While many Java mobile application developers do go
the route of serializing DTOs, this approach is limited by the fact that
DTOs by definition have no logic, and Java ME does not support object
serialization. Without support for serialization it is not possible to
make an object exchange transparent. An alternative approach is to use
XML to encode the objects to be exchanged. In addition to the fact that
an object-XML-object mechanism doesn't differ much from object
serialization, it is readable by both computers and humans, unlike a
serialized object. Human readability simplifies the process of debugging
application code because generating different instances of the same
object is just a matter of editing an XML file. Furthermore, you can use
any browser to send a request to the server and observe its response.
Finally, using XML for data interchange means that your application can
interact with clients built using different technologies, not just Java
ME but also Java Standard Edition and .Net, for example. On the other
hand, the almost universally recognized downside of using XML for data
interchange is the inherent parsing processes and syntactic analysis
involved. Rather than spending a lot of time on this aspect of your code,
or being frustrated by Java ME's lack of support for XML, you can use
a small-memory XML parser. Examples later in this article are based on
the KXML library. More Information
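
Along the lines the article describes, here is a minimal Java ME style
sketch that uses the kXML 2 pull parser to rebuild a DTO from an XML
payload received from the server; the CustomerDTO class and the element
names are hypothetical rather than taken from the article's own example.

    import java.io.Reader;
    import org.kxml2.io.KXmlParser;
    import org.xmlpull.v1.XmlPullParser;

    // Hypothetical DTO: no logic, just fields to carry data between tiers.
    class CustomerDTO {
        String name;
        String email;
    }

    public class CustomerXmlReader {
        // Expects a payload such as:
        //   <customer><name>Ada</name><email>ada@example.com</email></customer>
        public static CustomerDTO read(Reader in) throws Exception {
            KXmlParser parser = new KXmlParser();
            parser.setInput(in);
            parser.nextTag();
            parser.require(XmlPullParser.START_TAG, null, "customer");

            CustomerDTO dto = new CustomerDTO();
            while (parser.nextTag() == XmlPullParser.START_TAG) {
                String tag = parser.getName();
                String text = parser.nextText();   // text plus the end tag
                if ("name".equals(tag)) {
                    dto.name = text;
                } else if ("email".equals(tag)) {
                    dto.email = text;
                }
            }
            parser.require(XmlPullParser.END_TAG, null, "customer");
            return dto;
        }
    }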

MySQL to Get Injection of Google Code

MySQL has laid out its software road map through 2009, including some
code contributed by Google and security improvements that are due in
MySQL 7.0. Google is secretive about the distributed architecture
underlying its services, but it's known to be one of MySQL's biggest
users, running hundreds or even thousands of its databases worldwide.
The search company has done a lot of work customizing MySQL to meet
its special needs, which include better database replication, and tools
to monitor a high volume of database instances. MySQL 5.1 is scheduled
for general availability in the first quarter next year. Advances
include table and index partitioning, which should boost data warehousing
performance, and the option of row-based replication, which lets
companies create more exact back-up replicas. The big change in 6.0
will be the availability of MySQL's storage engine, Falcon. The most
popular storage engine for MySQL has historically been InnoDB, but two
years ago Oracle acquired InnoDB's developer, Innobase. Oracle continued
to license the software to MySQL, but MySQL wanted an alternative.
Falcon will do crash recovery and roll-back operations faster than
InnoDB because they are done from main memory, Schumacher said, but some
InnoDB features, like foreign key support and full-text indexing, won't
be supported until MySQL 6.1. 6.1 is due to go into beta in mid-2008
and start to ship widely in 2009. Improvements include better prepared
statements and server-side cursors, Schumacher said. Despite all the
buzz a few years ago about native XML (Extensible Markup Language)
support, Axmark said he's still waiting for a clear signal about what
customers want. Until then it's not a big priority for MySQL, although
there are some XML capabilities in 5.1. More Information

Make Ajax Development Easier With AjaxTags

Traditionally, Web-based user interfaces (including both pages and
applications) have required that each request made by the user force
a page refresh, consuming a considerable amount of time and bandwidth.
Repeated page refreshes can result in a fairly slow and clunky Web
experience, even for users with the fastest broadband connection
available. These days, developers everywhere are flocking to new
tricks and techniques to drastically improve the performance and user
experience within Web-based applications. Web applications coded with
Ajax can allow data to be sent to the server asynchronously in the
background, while simultaneously updating various parts of the Web
page being viewed without a page reload. Ajax comprises a number of
objects and technologies. And despite the X initial in the acronym
Ajax, XML might never be used at all. The response sent back to the
browser could be one of various types or formats, including but not
limited to plain text, HTML, or XML. This article describes a compact
little JSP tag library that uses some external JavaScript to bring
easy-to-use Ajax support to your JSP pages: AjaxTags. More Information

Tuesday, October 23, 2007

SIP Interface to VoiceXML Media Services

Members of the IETF Media Server Control (MEDIACTRL) Working Group have
released an initial Internet Draft for "SIP Interface to VoiceXML Media
Services." Commonly, application servers control media servers use this
protocol for pure VoiceXML processing capabilities. This protocol is
an adjunct to the full MEDIACTRL protocol and packages mechanism.
VoiceXML 2.x is a World Wide Web Consortium (W3C) standard for creating
audio and video dialogs that feature synthesized speech, digitized audio,
recognition of spoken and DTMF key input, recording of audio and video,
telephony, and mixed initiative conversations. VoiceXML allows Web-based
development and content delivery paradigms to be used with interactive
video and voice response applications. The interface described here
leverages a mechanism for identifying dialog media services first
described in RFC 4240. The interface has been updated and extended to
support the W3C Recommendation for VoiceXML 2.0 and VoiceXML 2.1. A set
of commonly implemented functions and extensions have been specified
including VoiceXML dialog preparation, outbound calling, video media
support, and transfers. VoiceXML session variable mappings have been
defined for SIP with an extensible mechanism for passing
application-specific values into the VoiceXML application. Mechanisms
for returning data to the Application Server have also been added. Among
the use cases: CCXML/VoiceXML. CCXML 1.0 defines language elements that
allow for Dialogs to be prepared, started, and terminated; it further
allows for data to be returned by the dialog environment, for call
transfers to be requested (by the dialog) and responded to by the CCXML
application, and for arbitrary eventing between the CCXML application
and running dialog application. The interface described in this document
can be used by CCXML 1.0 implementations to control VoiceXML Media
Servers. More Information See also the IETF MEDIACTRL WG Charter: Click Here

The Future of File Systems: Jeff Bonwick and Bill Moore Explain ZFS

In this interview, ACM Queue speaks with two Sun engineers who are
bringing file systems into the 21st century. Jeff Bonwick, CTO for
storage at Sun, led development of the ZFS file system, which is now
part of Solaris. Bonwick and his co-lead, Sun Distinguished Engineer
Bill Moore, developed ZFS to address many of the problems they saw
with current file systems, such as data integrity, scalability, and
administration. Bonwick and Moore explain what makes ZFS such a big
leap forward. "One of the design principles we set for ZFS was: never,
ever trust the underlying hardware. As soon as an application generates
data, we generate a checksum for the data while we're still in the same
fault domain where the application generated the data, running on the
same CPU and the same memory subsystem. Then we store the data and the
checksum separately on disk so that a single failure cannot take them
both out. When we read the data back, we validate it against that
checksum and see if it's indeed what we think we wrote out before. If
it's not, we employ all sorts of recovery mechanisms. Because of that,
we can, on very cheap hardware, provide more reliable storage than you
could get with the most reliable external storage. It doesn't matter
how perfect your storage is, if the data gets corrupted in flight --
and we've actually seen many customer cases where this happens -- then
nothing you can do can recover from that. With ZFS, on the other hand,
we can actually authenticate that we got the right answer back and,
if not, enact a bunch of recovery scenarios. That's data integrity.
Another design goal we had was to simplify storage management. When
you're thinking about petabytes of data and hundreds, maybe even thousands
of disk drives, you're talking about something that no human would ever
willingly take care of. ZFS is composed of several layers,
architecturally, but the core of the whole thing is a transactional
object store. The bulk of ZFS, the bulk of the code, is providing a
transactional store of objects. You can have up to 2^64 objects, each
2^64 bytes in size, and you can perform arbitrary atomic transactions
on those objects. Moreover, a storage pool can have up to 2^64 sets of
these objects, each of which is a logically independent file system.
Given this foundation, a lot of the heavy lifting of writing a POSIX
file system is already done for you..." More Information See also 'What is ZFS?': Click Here
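
The end-to-end checksum idea is easy to illustrate outside of ZFS. The
following toy Java sketch (not ZFS code, and using SHA-256 rather than
ZFS's own checksum functions) computes a checksum when a block is
written, keeps it separate from the data it covers, and verifies it on
every read:

    import java.io.IOException;
    import java.security.MessageDigest;
    import java.util.Arrays;

    // Toy illustration of end-to-end checksumming: checksum the block where
    // it is produced, store block and checksum separately, verify on read.
    public class ChecksummedStore {

        private byte[] storedBlock;     // stands in for one on-disk copy
        private byte[] storedChecksum;  // kept apart from the data it covers

        private static byte[] checksum(byte[] data) throws Exception {
            return MessageDigest.getInstance("SHA-256").digest(data);
        }

        public void write(byte[] block) throws Exception {
            storedChecksum = checksum(block); // computed in the writer's fault domain
            storedBlock = (byte[]) block.clone();
        }

        public byte[] read() throws Exception {
            if (!Arrays.equals(checksum(storedBlock), storedChecksum)) {
                // ZFS would now try another replica or reconstruct the data;
                // this sketch can only report the corruption.
                throw new IOException("checksum mismatch: data corrupted");
            }
            return storedBlock;
        }
    }

The point of the sketch is only the ordering: the checksum is taken
before the data leaves the fault domain that produced it, so corruption
anywhere downstream is detectable at read time.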

OASIS EDXL Hospital AVailability Exchange (HAVE) Version 1.0

The EDXL Distribution Element (DE) specification describes a standard
message distribution framework for data sharing among emergency
information systems using the XML-based Emergency Data Exchange
Language (EDXL). This format may be used over any data transmission
system, including but not limited to the SOAP HTTP binding. EDXL-HAVE
specifies an EDXL-based XML document format that allows the
communication of the status of a hospital, its services, and its
resources. These include bed capacity and availability, emergency
department status, available service coverage, and the status of a
hospital's facility and operations. In a disaster or emergency
situation, there is a need for
hospitals to be able to communicate with each other, and with other
members of the emergency response community. The ability to exchange
data in regard to hospitals' bed availability, status, services, and
capacity enables both hospitals and other emergency agencies to
respond to emergencies and disaster situations with greater efficiency
and speed. In particular, it will allow emergency dispatchers and
managers to make sound logistics decisions -- where to route victims,
which hospitals have the ability to provide the needed service. Many
hospitals have expressed the need for, and indeed are currently using,
commercial or self-developed information technology that allows them
to publish this information to other hospitals in a region, as well
as EOCs, 9-1-1 centers, and EMS responders via a Web-based tool.
Systems that are available today do not record or present data in a
standardized format, creating a serious barrier to data sharing between
hospitals and emergency response groups. Without data standards,
parties of various kinds are unable to view data from hospitals in a
state or region that use a different system -- unless a specialized
interface is developed. More Information See also the OASIS Emergency Management TC: Click Here

Open Document Format v1.1 Accessibility Guidelines Version 1.0

Members of the OASIS Open Document Format for Office Applications
(OpenDocument) TC have approved a Committee Draft version of the
"Accessibility Guidelines" for public review. "Open Document Format
v1.1 Accessibility Guidelines Version 1.0" is a guide for Office
Applications that support version 1.1 of the OpenDocument format, to
promote and preserve accessible ODF documents. The public review
period ends 21-December-2007. From the Overview: "The Open Document
Format v1.1 (ODF) is capable of encoding and storing a lot of
structural and semantic information. This information is needed by
people with disabilities and the tools they use (assistive technologies),
to gain access to computers and information. This document provides
guidelines for ODF 1.1 implementation. A successful ODF 1.1
implementation will enable users with disabilities to read, create,
and edit documents -- with full access to all of the meaning and
intent -- just like a person without any disability. Accessibility
is about enabling people with disabilities to participate in substantial
life activities that include work and the use of services, products,
and information. In the context of ODF documents, this means that
people with disabilities should be able to participate fully in the
creation, review, and editing process of the documents. A blind person,
for example, should be able to work with a document someone else created
(by getting a text description of the images used). A person should be
able to fill out a form without using hands. A person with poor vision
should be able to read through presentation materials easily." More Information

Roadmap for Accessible Rich Internet Applications (WAI-ARIA)

W3C announced that the Protocols and Formats Working Group has published
updated Working Drafts of the WAI-ARIA Roadmap, WAI-ARIA Roles, and
WAI-ARIA States and Properties. The WAI-ARIA Suite of documents
addresses the accessibility to people with disabilities of dynamic Web
content built with Ajax and DHTML. WAI-ARIA includes technologies to
map controls, Ajax live regions, and events to accessibility APIs,
including custom controls used for rich Internet applications. It also
describes new navigation techniques to mark common Web structures as
menus, primary content, secondary content, banner information and other
types of Web structures. Implementation of WAI-ARIA in languages such
as HTML 4, HTML 5 and XHTML is in active development. According to a
SecuritySpace Technology Penetration Report, more than 55% of all Web
sites today contain JavaScript, dramatically affecting the ability of
persons with disabilities to access Web content. HTML and other markup
languages do not provide adequate support for accessible dynamic content.
A number of W3C initiatives are underway to address this problem using
a declarative markup approach. This roadmap is designed to create a
bridge to fix the interoperability problem with assistive technologies
now by incorporating the appropriate metadata in current XHTML markup
to support today's accessibility APIs. It will incorporate some of the
advanced accessibility features originally designed for technologies
like XHTML 2. The intent of XHTML 2 is to make the enablement of Web
applications and documents easier for authors. This roadmap will create
cross-cutting technologies that XHTML authors may incorporate in their
markup to describe GUI widgets and document structures to aid assistive
technology vendors in providing accessible, usable solutions. The W3C
WAI PF working group will work with User Agent manufacturers and
assistive technology vendors to ensure a working solution. More Information

SOA Made Fast and Easy

Of the 11,000 largest enterprises worldwide, 95% are engaged in "some
type of effort to implement SOA," according to Susan Eustis, president
of WinterGreen Research: "Most of these projects have started out as
compliance efforts and have been extended to include a dashboard that
is used to manage the business. SOA starts out as a small trial
initiative before it is expanded." With so much work going on, the hype
around SOA has eroded. In its place are more than a few startling truths:
When it comes to SOA, the network is everything. Not every project is
SOA friendly. An often shockingly expensive initial SOA project will
pay for itself repeatedly over time, as other projects reuse the
stockpile of services -- provided you've made that reuse easy. Perhaps
the biggest school-of-hard-knocks lesson is learning when not to use a
services approach. "SOA is not an end in itself. You need to use it
in the context of solving a business problem," says Steven Weiskircher,
vice president of IT for audio-electronics merchant Crutchfield, in
Charlottesville, Va. Crutchfield began an SOA pilot about two years
ago, when it upgraded its mission-critical catalog/call center, e-commerce
and retail order-taking applications, which are 90% custom code... When
the benefits of reusability are clear-cut, network executives are left
in a pickle. How can they make sure those bloated services fly across
the network, particularly as they scale horizontally and vertically?
Enter the XML appliance, which offloads processing of XML documents from
the server to a network device. Web-services standards specify use of
XML document headers that provide routing information, just as IP headers
do. Such functions are available in Cast Iron Systems' Application
Integration suite, Forum Systems' Vantage, Intel's XSLT Accelerator
(a software XML accelerator) and IBM's WebSphere DataPower, as well as
in Cisco's Application Oriented Networking (AON) line, via technology
the company gained in February, when it acquired Reactivity. When a Web
services-based SOA is coupled with an XML appliance, routing moves up
the stack. The packet becomes the message. The message is part of the
workflow that executes the business logic. Most vendors agree on and
comply with basic, well-proven Web-services standards [for SOA]. These
include WS-Security, SOAP, and Universal Description, Discovery, and
Integration (UDDI). In addition, the industry has several alphabets'
worth of acronyms either in the works as standards or already widely
accepted. These include Java API for
XML-based Web Services, Business Process Execution Language, WS-Reliable
Messaging, WS-Addressing, SOAP with Attachments, Message Transmission
Optimization Mechanism and WS-Policy. More Information

CSS Mobile Profile 2.0

Members of the W3C CSS Working Group have released a Last Call Working
Draft for "CSS Mobile Profile 2.0," updating the previous draft of
2006-12-08. Comments are welcome through 15-November-2007. This subset
of Cascading Style Sheets (CSS) 2.1 is a baseline for implementations
of CSS on constrained devices like mobile phones, written with WICD
Mobile 1.0 to ensure interoperability and for alignment with OMA's
Wireless CSS Specification 1.1. The specification's intent is not to
produce a profile of CSS incompatible with the complete specification,
but rather to ensure that implementations that, due to platform
limitations, cannot support the entire specification implement a common
subset that
is interoperable not only amongst constrained implementations but also
with complete ones. Since the goal of the specification is to define
a baseline interoperability level, user agents MAY accept CSS documents
conforming to CSS 2.1 or subsequent revisions of the CSS family of
Recommendations. Document sections 3 (Selectors), 4 (At-rules), and
5 (Properties) clarify features that are required or optional for
conformance. More Information See also the W3C Style Activity Statement: Click Here

Friday, October 19, 2007

Update XML in DB2 9.5

This article discusses the W3C "XQuery Update Facility" specification
in the context of IBM DB2 9.5. The XQuery Update Facility extends the
XML Query language, XQuery. The XQuery Update Facility provides
expressions that can be used to make persistent changes to instances
of the XQuery 1.0 and XPath 2.0 Data Model. It provides facilities to
perform any or all of the following operations on an XDM instance:
insertion of a node, deletion of a node, modification of a node by
changing some of its properties while preserving its identity, and
creation of a modified copy of a node with a new identity. One of the
most significant new features in IBM DB2 9.5 for Linux, UNIX, and
Windows is the XML update functionality. The previous version, DB2 9,
introduced pureXML support for storing and indexing of XML data and
querying it with the SQL/XML and XQuery languages. Modifications to an
XML document were performed outside of the database server followed
by an update of the full document in DB2. Now DB2 9.5 introduces the
XQuery Update Facility, a standardized extension to XQuery that allows
you to modify, insert, or delete individual elements and attributes
within an XML document. This makes updating XML data easier and
provides higher performance. When DB2 9.5 executes the UPDATE statement,
it locates the relevant document(s) and modifies the specified elements
or attributes. This happens within the DB2 storage layer; that is, the
document stays in DB2's internal hierarchical XML format the entire
time, without any parsing or serialization. Concurrency control and
logging happens on the level of full documents. Overall, this new
update process can often be 2x to 4x faster than the [DB2 9 pureXML]
process. This article describes how to perform such XML updates with
XQuery transform expressions. You'll see how to embed a transform in
UPDATE statements to permanently change data on disk, and in queries,
to modify XML data "on the fly" while reading it out without permanently
changing it. The latter can be useful if applications need to receive
an XML format that's different from the one in the database. More Information See also IBM Systems Journal: Click Here
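
As a rough sketch of what such an embedded transform can look like from
a Java application (the table and column names, connection details, and
the exact transform syntax accepted by a particular DB2 9.5 build are
assumptions here, not taken from the article), an UPDATE carrying an
XQuery transform expression might be issued over JDBC like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class UpdatePhone {

        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                    "jdbc:db2://localhost:50000/SAMPLE", "db2user", "secret");

            // Replace the value of a single element in place; DB2 applies the
            // change inside its storage layer rather than reparsing and
            // replacing the whole document.
            String sql =
                "UPDATE customer SET info = "
              + "  XMLQUERY('copy $new := $doc "
              + "            modify do replace value of $new/customerinfo/phone "
              + "                   with \"555-0100\" "
              + "            return $new' PASSING info AS \"doc\") "
              + "WHERE cid = ?";

            PreparedStatement ps = con.prepareStatement(sql);
            ps.setInt(1, 1002);
            ps.executeUpdate();
            ps.close();
            con.close();
        }
    }

The same copy/modify/return expression can also be used in a plain
query to rewrite XML "on the fly" for the caller without persisting the
change, as the article describes.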

Why Microsoft Should Not Support SCA

Will Microsoft support Service Component Architecture (SCA)? It seems
unlikely... First, it's important to understand that SCA is purely
about portability -- it has nothing to do with interoperability. To
connect applications across vendor boundaries, SCA relies on standard
Web services, adding nothing extra. This is an important point, but
it's often lost (or misunderstood) in SCA discussions. Because some
of SCA's supporters describe it as a standard for SOA, people assume
it somehow enhances interoperability between products from different
vendors. This just isn't true, and so Microsoft not supporting SCA
will in no way affect anyone's ability to connect applications running
on different vendor platforms. But what about portability? Just as
the various Java EE specs have allowed some portability of code and
developer skills, SCA promises the same thing. Wouldn't Microsoft
supporting SCA help here? The answer is yes, but only a little. To
explain why, it's useful to look separately at the two main things
SCA defines: programming models for creating components in various
languages and an XML-based language for defining composites from
groups of these components... While some SCA skills portability will
occur -- at least everybody will be describing components and
composites using the same terms -- I'm doubtful that SCA will do much
to help move applications from one vendor's SCA product to another.
Put another way, don't look to SCA to play a big role in reducing
vendor lock-in... More Information See also the OASIS SCA TCs: Click Here

The Search Engine Unfriendliness of Web 2.0

Wouldn't it be great if all those whiz-bang Web 2.0 interactive elements
based on AJAX (Asynchronous JavaScript and XML) and Flash -- such as
widgets and gadgets and Google Maps mashups -- were search engine
optimal? Unfortunately, that's not the case. In fact, these
technologies are inherently unfriendly to search engine spiders. So,
if you intend to harness Web 2.0 technologies for wider syndication,
increased conversion, improved usability and greater customer engagement,
you'd better read on or you'll end up missing the boat when it comes
to better search engine rankings. When it comes to AJAX and Flash, the
onus is on you to render them search engine friendly. The major search
engines just can't cope with these Web 2.0 technologies very well at
all. Some search engines, including Google, have rudimentary means of
extracting content and links from Flash. Nonetheless, any content or
navigation embedded within Flash will, at best, rank poorly in
comparison to a static, HTML-based counterpart, and at worst, not
even make it into the search engine's index. Google's view on Flash
is that it doesn't provide a user-friendly experience. Flash is wholly
inaccessible to the vision-impaired, unrenderable on devices such as
mobile phones and PDAs, and can't be accessed without broadband
connectivity. In particular, Google frowns on navigational elements
presented exclusively in Flash. Given this stance, Google isn't likely
to make big improvements on how it crawls, indexes and ranks Flash
files anytime soon. So, it's in your hands to either replace those
Flash elements with a more accessible alternative like CSS/DHTML or
to employ a Web design approach known as "progressive enhancement"...
AJAX poses similar problems to spiders as Flash does because AJAX
also relies on JavaScript. Search engine spiders can't execute
JavaScript commands. AJAX can be used to pull data seamlessly in
the background onto an already loaded Web page, sparing the user
from the "click-and-wait" frustrations associated with more
conventional Web sites, but the additional content that's pulled in
via AJAX is invisible to the spiders unless it's preloaded into the
page's HTML and simply hidden from the user via CSS. Here, progressive
enhancement renders a non-JavaScript version of the AJAX application
for spiders and JavaScript-incapable browsers. A low-tech alternative
to progressive enhancement is to place an HTML version of your AJAX
application within noscript tags.

Knowledge Services on the Semantic Web

In this article we present a Semantic Web-enabled architecture for
trading knowledge assets. The most suitable environment for
technologically supporting Web-enabled knowledge provision services
is the use of Semantic Web services. In this area, we should note the
recent work of the Semantic Annotations for WSDL (SAWSDL) Working
Group of the W3C, whose objective is to develop a mechanism to enable
semantic annotation of Web services descriptions. In our work we
developed multifaceted ontological structures in order to define the
necessary modeling primitives that are important for describing
knowledge provision services that go beyond common Web services like
a flight booking or book selling. The knowledge service utilizes the
content and context ontology for a twofold purpose: to discover
knowledge objects within a collection and to be discovered as a service,
namely to determine its identity. We have specified an enhanced
universal discovery, description, and integration (UDDI) platform
known as k-UDDI, which enables the discovery, negotiation, and
invocation of knowledge services with the incorporation
of reference ontologies that semantically enrich the Web services
infrastructure. The k-UDDI holds all reference ontologies that allow
a common understanding of services and facilitate semantically enhanced
service discovery, the handling of IPR and business-specific issues,
and finally negotiation processes that generate sound contracts. Knowledge service
discovery is provided by the discovery service of the registry, which
is exposed via a Web service interface. As knowledge services will be
traded, mechanisms are needed to support negotiation and contracting
tasks. We make use of our negotiation ontology and develop a flexible
negotiation mechanism that enables bargaining between the service
provider and requester concerning the terms and conditions of use of a
knowledge service. [Also published in CACM 50/10 (October 2007), 53-58.]

Semantic Web Visions: A Tale of Two Studies

Professor Jorge Cardoso of the University of Madeira, Portugal, has
written a very interesting paper titled "The Semantic Web Vision: Where
are We?" Cardoso defines the Semantic Web as "a machine-readable World
Wide Web" and he notes "a significant evolution of standards as
improvements and innovations allow the delivery of more complex, more
sophisticated, and more far-reaching semantic applications." Cardoso
posted to a variety of technical e-mail lists to solicit survey
responses and sent 40 personal invitations. Two-thirds of the 627
responses came from academics and 18% from industry with 16% of
respondents working in both academia and industry. He asked survey
participants to report their use of ontology editors, ontology
languages, and reasoning engines, software applications that derive
new facts or associations from existing information. Refer to his paper
for findings. Over 50% of respondents reported using ontologies for
either or both of two purposes: to share common understanding of the
structure of information among people or software agents (69.9%) and
to enable reuse of domain knowledge (56.3%). These are knowledge
management functions, stepping-stones on the path to the vision of
autonomous software agents negotiating the Web that Tim Berners-Lee
first articulated over ten years ago. Only 12.4% of answers indicated
use of ontologies for purposes that are, perhaps, closer to
actualization of that vision, for "code generation, data integration,
data publication and exchange, document annotation, information
retrieval, search, reasoning, annotating experiments, building common
vocabularies, Web service discovery or mediation, and enabling
interoperability." Nonetheless, Cardoso concludes that "70% of people
working on the Semantic Web are committed to deploying real-world
systems that will go into production in less than 2 years."

Semantic Web Services, Part 1

Semantic Web services (SWS) has been a vigorous technology research
area for about six years. A great deal of innovative work has been done,
and a great deal remains. Several large research initiatives have been
producing substantial bodies of technology, which are gradually maturing.
SOA vendors are looking seriously at semantic technologies and have
made initial commitments to supporting selected approaches. In the
world of standards, numerous activities have reflected the strong
interest in this work. Perhaps the most visible of these is SAWSDL
(Semantic Annotations for WSDL and XML Schema). SAWSDL recently
achieved Recommendation status at the World Wide Web Consortium.
SAWSDL's completion provides a fitting opportunity to reflect on the
state of the art and practice in SWS -- past, present, and future.
This two-part installment of 'Trends & Controversies' discusses what
has been accomplished in SWS, what value SWS can ultimately provide,
and where we can go from here to reap these technologies' benefits.
The essays in this issue effectively define service technology needs
from a long-term industry perspective. Brodie starts by recognizing
that, although industry has embraced services as the way forward on
some of its most pressing problems, SOA is a framework for integration
rather than the solution for integration. He outlines the contributions
that are needed from semantic technologies and the implications for
computing beyond services. Leymann emphasizes the broad scope of
service-related technical requirements that must be addressed before
SWS can effectively meet businesses' IT needs and semantically enabled
SOA can be regarded as an enterprise solution rather than a mere
packaging of applications. He argues that a great deal remains to be
done in several important areas. More Information See also W3C SAWSDL: Click Here

Revised Civic Location Format for PIDF-LO

Members of the IETF Geographic Location/Privacy (GEOPRIV) Working Group
have released an updated version of "Revised Civic Location Format for
PIDF-LO." The work was produced within the IETF Real-time Applications
and Infrastructure Area. RFC 4119 "A Presence-based GEOPRIV Location
Object Format" defines a location object which extends the XML-based
Presence Information Data Format (PIDF), designed for communicating
privacy-sensitive presence information and which has similar properties.
RFC 4776 "Dynamic Host Configuration Protocol (DHCPv4 and DHCPv6)
Option for Civic Addresses Configuration Information" further defines
information about the country, administrative units such as states,
provinces, and cities, as well as street addresses, postal community
names, and building information. The option allows multiple renditions
of the same address in different scripts and languages. This document
("Revised Civic Location Format for PIDF-LO") augments the GEOPRIV civic
form to include the additional civic parameters captured in RFC 4776.
The document also introduces a hierarchical structure for thoroughfare
(road) identification which is employed in some countries. New elements
are defined to allow for even more precision in specifying a civic
location. The XML schema (Section 4, 'Civic Address Schema') defined
for civic addresses allows for the addition of the "xml:lang" attribute
to all elements except "country" and "PLC", which both contain
language-neutral values. The IETF GEOPRIV Working Group was chartered
to assess the authorization, integrity and privacy requirements that
must be met in order to transfer [location] information, or authorize
the release or representation of such information through an agent. As
more and more resources become available on the Internet, some
applications need to acquire geographic location information about
certain resources or entities. These applications include navigation,
emergency services, management of equipment in the field, and other
location-based services. But while the formatting and transfer of such
information is in some sense a straightforward process, the implications
of doing it, especially in regards to privacy and security, are
[underspecified]. Also in scope: authorization of requestors and
responders; authorization of proxies (for instance, the ability to
authorize a carrier to reveal what time zone one is in, but not what
city); an approach to the taxonomy of requestors, as well as to the
resolution or precision of information given them. More Information

Thursday, October 18, 2007

ebXML Messaging Services 3.0 Approved as an OASIS Standard

OASIS announced that its members have approved the "ebXML Messaging
Services (ebMS) version 3.0: Part 1, Core Features" specification as
an OASIS Standard. ebMS 3.0 defines a Web services-based method for
the reliable, secure exchange of business information. It is the latest
addition to the ebXML family of specifications that was launched as a
global initiative by OASIS and the United Nations Centre for Trade
Facilitation and Electronic Business (UN/CEFACT) and has been adopted
worldwide. ebMS is designed to be used either with or without any of
the other ebXML standards, including ebXML Business Process Specification
Schema (BPSS) 2.0.4 and a forthcoming version of ebXML Collaboration
Protocol Profile and Agreement (CPP/A). By design, ebMS 3.0 also fully
supports composition with other SOAP-based Web services specifications.
ebMS was developed under the Royalty-Free on Limited Terms Mode of the
OASIS Intellectual Property Rights Policy. Axway, Fujitsu Computer
Systems, and NEC all verified successful use of ebMS 3.0, in accordance
with eligibility requirements for all OASIS Standards. The OASIS ebMS
Technical Committee continues work on Part 2 of ebMS 3.0 that will
provide functional extensions to the ebMS 3.0 Core. Participation in
the Technical Committee remains open to all companies, non-profit groups,
governments, academic institutions, and individuals. More Information See also the specification: Click Here

W3C First Public Working Draft: Language Bindings for DOM Specifications

W3C announced that members of the Web API Working Group have released
the First Public Working Draft for "Language Bindings for DOM
Specifications." The document was produced as part of the Rich Web
Clients Activity in the W3C Interaction Domain. The specification
defines an Interface Definition Language (IDL) to be used by
specifications that define a Document Object Model (DOM). How interfaces
described with this IDL correspond to constructs within ECMAScript and
Java execution environments is also detailed. It is intended to specify
in detail the IDL language used by W3C specifications to define DOM
interfaces, and to provide precise conformance requirements for
ECMAScript and Java bindings of such interfaces. It is expected that
this document will act as a guide to implementors of already-published DOM
specifications, and that newly published DOM specifications will reference
this document to ensure conforming implementations of DOM interfaces
are interoperable. The interface definition language (defined in a
language independent manner) is based on the Object Management Group's
Interface Definition Language, and is syntactically a subset thereof.
The W3C Web API Working Group was chartered to develop specifications
that enable improved client-side application development on the Web.
This includes the development of programming interfaces to be made
available in a Web client. The target platforms for this Working Group
includes desktop and mobile browsers as well as many specialty,
browser-like environments that use Web client technologies. The goal
is to promote universal access both for devices and users, including
those with special needs. Additionally, the Working Group has the goal
to improve client-side application development through education,
outreach and interoperability testing. More Information
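
For readers who have not worked with a DOM binding directly, the sketch
below shows the existing Java binding of two DOM Core interfaces
(Document and Element) in use. It is ordinary Java SE code, not an
artifact of the new IDL draft, and is only meant to illustrate what an
IDL-defined interface looks like once rendered into Java:

    import java.io.ByteArrayInputStream;

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;

    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    public class DomBindingDemo {

        public static void main(String[] args) throws Exception {
            byte[] xml = "<book lang='en'><title>IDL and DOM</title></book>"
                    .getBytes("UTF-8");

            DocumentBuilder builder =
                    DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = builder.parse(new ByteArrayInputStream(xml));

            // Each call below corresponds to an attribute or operation on a
            // DOM interface defined in IDL.
            Element root = doc.getDocumentElement();
            System.out.println(root.getTagName());          // book
            System.out.println(root.getAttribute("lang"));  // en
            System.out.println(root.getElementsByTagName("title")
                                   .item(0).getTextContent());  // IDL and DOM
        }
    }

An ECMAScript binding of the same interfaces exposes the corresponding
properties and methods on host objects, which is precisely the mapping
the new Working Draft sets out to specify.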

OOXML Payback Time as Global Standards Work in SC 34 "Grinds to a Halt"

As you will recall, Microsoft's OOXML submission to ISO/IEC via Ecma did
not garner enough votes to obtain approval in the first round of voting,
which closed on September 2. As you may also recall, in the run-up to
that vote there were many sudden increases in membership not only in
national standards bodies, but in SC 34, the ISO/IEC JTC1 committee where
the national votes were cast. As part of the same trend, eleven countries
upgraded their membership from Observer to Participating status in SC 34,
in order to secure the greater influence over the final vote that could be
gained as "P" members. The great majority of those upgrading companies
voted to approve OOXML, but this influx was still insufficient to carry
the day. Many felt that these events damaged the integrity of the standards
process. It now appears that the damage is extending beyond reputation,
and is affecting the ability of the standards process to function at all.
Because these newly minted "P" members have not participated
in any of the voting required of SC 34 members, other than the OOXML vote,
the work of this very important committee, in the words of its chair, has
"ground to a halt." In fact, not a single vote has achieved sufficient
participation to pass - other than the OOXML vote - since the new members
arrived. More Information See also the SC 34 Secretariat Manager's Report: Click Here