
Friday, November 30, 2007

XProc: An XML Pipeline Language

W3C has announced the release of a new version of the Working Draft for
"XProc: An XML Pipeline Language." This document was produced by the
XML Processing Model Working Group which is part of the XML Activity.
In response to comments made on the previous draft, the Working Group
decided to make significant changes to the way XPath and XSLT are
supported in XProc. In particular, the requirement to support XPath 1.0
as XProc's expression language has been relaxed and the two XSLT steps
have been combined into a single step. The Working Group has not finished
addressing all of the outstanding comments on its previous draft but
feels that the XPath change in particular has such a pervasive impact
on the language that it has decided to publish a new draft immediately
in order to expose this decision. User and implementor feedback on this
decision would be most valuable. Norm Walsh writes in his blog
commentary: "The decision to wrap both versions of XSLT up into a
single step makes the signature for the step a little odd in the XSLT
1.0 case, but workable. Pipeline authors can choose the version they
want, implementors can choose the version automagically if authors don't.
Implicit pipeline inputs and outputs were designed to make very simple,
straight through pipelines as short as possible (syntactically). But
they added significant complexity to the analysis of pipelines that
call pipelines. So now we require all the inputs and outputs of a
'p:pipeline' [element] to be explicit." XProc defines a language for
describing operations to be performed on XML documents. An XML Pipeline
specifies a sequence of operations to be performed on one or more XML
documents. Pipelines generally accept one or more XML documents as input
and produce one or more XML documents as output. Pipelines are made up
of simple steps which perform atomic operations on XML documents and
constructs similar to conditionals, loops and exception handlers which
control which steps are executed. More Information. See also the blog by Norm Walsh.
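
To make the prose concrete, here is a minimal sketch of what a pipeline with explicit inputs and outputs might look like under the draft. The namespace URI and step names follow the later XProc drafts and the file name is invented, so treat this as illustrative rather than normative syntax:

```xml
<p:pipeline name="fixup" xmlns:p="http://www.w3.org/ns/xproc">
  <!-- Under the new draft, all pipeline inputs and outputs
       must be declared explicitly -->
  <p:input port="source"/>
  <p:output port="result"/>

  <!-- Two atomic steps run in sequence: expand XIncludes,
       then transform with the single combined XSLT step -->
  <p:xinclude/>
  <p:xslt>
    <p:input port="stylesheet">
      <p:document href="style.xsl"/>
    </p:input>
  </p:xslt>
</p:pipeline>
```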

CDF: The Common Format You've Never Heard Of

XHTML, CSS 2.1, XMLHttpRequest, AJAX, XForms, SVG, XSLT, XPath, XSL-FO:
The Compound Document Format was set up as a way of tying together at
a minimum all of those technologies described above into a single
cohesive whole. Put another way, it combines the core suite of W3C
document standards into one coherent package, although it does place
some fairly minor requirements on usage in order to provide
a consistent standard. CDF was in the news recently with the implosion
of the Open Document Foundation, originally established to endorse ODF,
though in its death throes it briefly highlighted the CDF format as
perhaps a better format for documents than either OOXML or ODF. The
effort of the CDF working group has been to essentially standardize
on the way that web documents can be bound together into what appears
to be a cohesive whole. Part of this is accomplished through the use
of a standard called the Web Integration Compound Document (or WICD).
Already, much of CDF has been implemented in the more forward-looking
browsers. Opera 9.5 has rather extensive support for most of CDF core,
and Firefox 3.0 is moving in that direction, though the biggest area of
weakness is SVG animation support. JustSystems, a
company with a huge presence in Japan that is beginning to make an
impact outside that country, has been working towards a CDF platform
for a number of years, and has one of the more expressive (and impressive)
displays of how compound documents COULD work... Both Sony and Nokia
have WICD implementations working (as prototypes) on certain of their
mobile phone chipsets, with similar announcements from Abbra Vidualize
and BitFlash, both makers of mobile graphical chipsets, while Sun is
partnering with OpenWave to create a formal WICD implementation in
line with "JSR 290: Java Language and XML User Interface Markup
Integration." More Information

Display Google Calendar Events on your PHP Web Site with XPath

In this article the author explains how XPath and SimpleXML provide
the right balance between readability and verbosity in XML-parsing
APIs. Google Calendar and other online calendaring applications
provide simple centralized systems where online communities can
maintain event calendars and community members can get information
about upcoming events. But many organizations prefer to display event
calendars on their community portals, forums, or blogs. They often
copy event calendar information from online calendaring applications
onto their Web sites, reducing the effectiveness of centrally managing
events online. Google Calendar provides an integration application
program interface (API) that provides a good solution to this problem.
The Google data API provides Atom feeds and the Atom Publishing
Protocol for retrieving, querying, updating, and creating events and
other information using Google Calendar and almost all the other
Google applications. There are also third-party integration APIs for
Microsoft .NET, the Java programming language, Python, and PHP that
encapsulate much of the Google data API functionality in a set of
object-oriented wrapper classes. Using XPath, you can automatically
keep a Web site's display of upcoming events up to date by querying
the Google data API event feeds and parsing its entries for relevant
details among the entries' elements. While XPath is not the fastest
XML API in the PHP toolkit, it is among the easiest to use when you
have a well-documented XML document on hand. You can use caching to
reduce the impact of XPath's relatively slow performance. More Information
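The article itself works in PHP with SimpleXML; the same extract-entries-from-an-Atom-feed idea can be sketched with Python's stdlib ElementTree, whose `findall()` accepts a limited XPath subset. The sample feed content below is invented:

```python
import xml.etree.ElementTree as ET

# Atom namespace used by Google data feeds
ATOM = "{http://www.w3.org/2005/Atom}"

def upcoming_events(feed_xml):
    """Return (title, updated) pairs for each entry in an Atom feed."""
    root = ET.fromstring(feed_xml)
    events = []
    # findall() takes a simple XPath-like path; entries are
    # direct children of the <feed> element
    for entry in root.findall(f"{ATOM}entry"):
        title = entry.findtext(f"{ATOM}title")
        updated = entry.findtext(f"{ATOM}updated")
        events.append((title, updated))
    return events

sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>Community meetup</title><updated>2007-12-01T19:00:00Z</updated></entry>
  <entry><title>Code sprint</title><updated>2007-12-08T10:00:00Z</updated></entry>
</feed>"""

print(upcoming_events(sample))
```

In a real deployment the feed string would come from the Google data API over HTTP, and the parsed results would be cached to offset XPath's relatively slow performance.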

IETF Recharters Network Configuration (NETCONF) Working Group

The IESG Secretary announced that the IETF Network Configuration (NETCONF)
Working Group in the Operations and Management Area of the IETF has been
rechartered. The NETCONF Working Group has been chartered to produce a
protocol suitable for network configuration. Background: "Configuration
of networks of devices has become a critical requirement for operators in
today's highly interoperable networks. Operators from large to small have
developed their own mechanisms or used vendor specific mechanisms to
transfer configuration data to and from a device, and for examining device
state information which may impact the configuration. Each of these
mechanisms may be different in various aspects, such as session
establishment, user authentication, configuration data exchange, and
error responses." The NETCONF protocol will use XML for data encoding
purposes, because XML is a widely deployed standard which is supported
by a large number of applications. XML also supports hierarchical data
structures. The NETCONF protocol should be independent of the data
definition language and data models used to describe configuration and
state data. However, the authorization model used in the protocol is
dependent on the data model. Although these issues must be fully
addressed to develop standard data models, only a small part of this
work will be initially addressed. This group will specify requirements
for standard data models in order to fully support the NETCONF protocol,
such as: (1) identification of principals, such as user names or
distinguished names; (2) mechanism to distinguish configuration from
non-configuration data; (3) XML namespace conventions; (4) XML usage
guidelines. Currently the NETCONF protocol is able to advertise which
protocol features are supported on a particular netconf-capable device.
However, there is currently no way to discover which XML Schemas are
supported on the device. The NETCONF working group will produce a
standards-track RFC with mechanisms making this discovery possible.
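
The XML encoding the charter refers to is visible in a basic NETCONF exchange. The following sketch follows the shape of the protocol's `get-config` operation (base namespace as in the NETCONF base specification); the message-id and data are illustrative:

```xml
<!-- Client request: retrieve the running configuration -->
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source>
      <running/>
    </source>
  </get-config>
</rpc>

<!-- Server reply: configuration data expressed in the device's
     own data model, which NETCONF deliberately does not define -->
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <data>
    <!-- device-specific configuration subtree -->
  </data>
</rpc-reply>
```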

Wednesday, November 28, 2007

Manage RSS Feeds With the Rome API

RSS (Really Simple Syndication) is an established way of publishing
short snippets of information, such as news headlines, project releases,
or blog entries. Modern browsers such as Firefox, and more recently
Internet Explorer 7, and mail clients such as Thunderbird recognize
and support RSS feeds, not to mention the large number of dedicated
RSS readers (aggregators) out there. The large number of individual
formats (at least six flavors of RSS plus Atom) can make it difficult
to manipulate the feeds by hand, however. RSS feeds aren't just for
end-users, though. A variety of application scenarios could require
you to read, publish, or process RSS feeds from within your code. Your
application could need to publish information through a set of RSS
feeds, or need to read, and possibly manipulate, RSS data from another
source. For example, some applications use RSS feeds to inform users
of changes to the application database that could affect them. RSS
feeds also can be useful inside of a development project. Tools like
Trac let you use RSS feeds to monitor changes made to a Subversion
repository or to a project Web site, which can be a good way to easily
keep tabs on the status of many projects simultaneously. Some
development projects, for instance, use RSS feeds to monitor continuous
integration build results. End users simply subscribe to the CI server's
feed in their RSS reader or RSS-enabled Web browser. The server
publishes real-time build results in RSS format, which the client can
then consult at any time without having to go to the server's Web site.
In this article the author shows how to manipulate RSS feeds in Java
using the Rome (RSS and Atom utilities) API. He also develops a concrete
application of these techniques, writing a simple class that publishes
build results from a Continuum build server in RSS format. More Information
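Rome is a Java API, so the article's code is Java; the publish-build-results idea can be sketched in a few lines of stdlib Python to show the structure of an RSS 2.0 channel. The feed title, URLs, and build data below are invented:

```python
import xml.etree.ElementTree as ET

def build_feed(results):
    """Build an RSS 2.0 feed string from (project, outcome, link) tuples."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    # Required channel metadata (values are illustrative)
    ET.SubElement(channel, "title").text = "Continuous integration builds"
    ET.SubElement(channel, "link").text = "http://ci.example.com/"
    ET.SubElement(channel, "description").text = "Latest build results"
    # One <item> per build result
    for project, outcome, link in results:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = f"{project}: {outcome}"
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")

feed = build_feed([("myapp", "SUCCESS", "http://ci.example.com/build/42")])
print(feed)
```

A library like Rome additionally abstracts over the half-dozen RSS flavors and Atom, which is exactly the tedium this hand-rolled version would run into in practice.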

Model-driven SOA Emerges

The combination of business process management (BPM) with
service-oriented architecture (SOA) is driving modeling for application
development, according to Steve Hendrick, group vice president of
application development research at International Data Corp. (IDC). As
enterprise IT looks for a more structured and consistent way of building
applications so that it can get the SOA benefits of Web services reuse,
modeling from the high level business requirements to the nitty-gritty
processes provides a way to do that, Hendrick said. But it is a trend
that most analysts, himself included, did not expect to emerge so
quickly. Back in the day, developers pretty much stuck with gathering
requirements, which usually ended up gathering dust on a shelf, and
then got down to coding applications. With the adoption of SOA and BPM
and attendant technologies, including business process modeling notation
(BPMN) and business process execution language (BPEL), that approach
is going over the application development waterfall in a barrel.

CURIE Syntax 1.0: A Syntax for Expressing Compact URIs

W3C announced the publication of an updated version of "CURIE Syntax
1.0." The document was produced by members of the W3C XHTML 2 Working
Group as part of the HTML Activity. Originally this document was based
upon work done in the definition of XHTML2, and work done by the
RDF-in-HTML task force, a joint task force of the Semantic Web Best
Practices and Deployment Working Group and XHTML 2 Working Group. It
is not yet stable, but has had extensive review and some use in other
W3C documents. It is being released in a separate, stand-alone
specification in order to speed its adoption and facilitate its use
in various specifications. The aim of the document is to outline a
syntax for expressing URIs in a generic, abbreviated syntax. While it
has been produced in conjunction with the HTML Working Group, it is not
specifically targeted at use by XHTML Family Markup Languages. The
target audience for this document is Language designers, not the users
of those Languages. More and more languages are expressing URIs in XML
using QNames. Since QNames are invariably shorter than the URI that
they express, this is obviously a very useful device. The definition
of a QName insists on the use of valid XML element names, but an
increasingly common use of QNames is as a means to abbreviate URIs,
and unfortunately the two are in conflict with each other. This
specification addresses the problem by creating a new data type whose
purpose is specifically to allow for the abbreviation of URIs in
exactly this way. This type is called a "CURIE" or a "Compact URI",
and QNames are a subset of this. CURIEs can be used in exactly the
same way that QNames have been used in attribute values, with the
modification that the format of the strings after the colon are
looser. In all cases a parsed CURIE will produce an IRI. However,
the process of parsing involves substituting the value represented
by the prefix for the prefix itself, and then simply appending the
part after the colon. More Information
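The parsing rule just described (substitute the value bound to the prefix, then append everything after the colon) fits in a few lines. The prefix bindings here are illustrative:

```python
def expand_curie(curie, prefixes):
    """Expand a CURIE into a full IRI using a prefix-binding map."""
    prefix, _, reference = curie.partition(":")
    if prefix not in prefixes:
        raise ValueError(f"undeclared prefix: {prefix!r}")
    # Substitute the prefix's bound value, then append the reference part
    return prefixes[prefix] + reference

bindings = {
    "foaf": "http://xmlns.com/foaf/0.1/",
    "wiki": "http://en.wikipedia.org/wiki/",
}

print(expand_curie("foaf:name", bindings))
# Unlike a QName, the part after the colon need not be a valid XML name:
print(expand_curie("wiki:Compact_URI", bindings))
```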

Use Custom Collations in XSLT 2.0

One emphasis of XSLT 2.0 is better support for internationalization,
especially sorting and comparing text. This seemingly simple task is
quite complicated in some languages; for example, accented characters
can be considered the same or different depending on context. Are A+acute,
A+grave, and A the same letter? Sometimes the answer needs to be yes,
despite the fact that they are three different code points. The simple
string comparison functions found in most languages, including XSLT 1.0,
aren't up to the task. This article demonstrates how to write a custom
collation function using XSLT extensions and invoke it from an XSLT 2.0
stylesheet with the open-source Saxon processor. To use a custom
collation with Saxon, you specify the name of the Java class that
implements the collation function. XSLT 2.0 has a number of functions
and elements that allow you to specify a collation. A collation is the
heart of any sorting algorithm. A collation function compares two items
and returns one of three values. If the first item appears before the
second, the function returns a value less than zero. If the two items
are equal, the function returns zero. Finally, as you might expect, if
the first item appears after the second, the return value is greater
than zero... More Information. See also the Saxon web site.
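
Saxon expects the collation to be a Java class, but the three-valued comparator contract the article describes is language-independent. This Python sketch uses an illustrative rule that treats accented letters as equal to their base letters, which is one answer to the A+acute question above:

```python
import unicodedata

def strip_accents(s):
    """Remove combining marks, so 'Á' collates like 'A'."""
    return "".join(
        c for c in unicodedata.normalize("NFD", s)
        if unicodedata.category(c) != "Mn"  # Mn = nonspacing (combining) mark
    )

def compare(a, b):
    """Collation contract: negative, zero, or positive depending on order."""
    ka, kb = strip_accents(a).lower(), strip_accents(b).lower()
    return (ka > kb) - (ka < kb)

print(compare("Ágile", "agile"))   # the accented and plain forms collate equal
print(compare("apple", "banana"))  # negative: first item sorts before second
```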

IBM Updates Free Enterprise Search Tool

IBM and Yahoo issued a new version of their free enterprise search
product on Tuesday, just weeks after rival Microsoft announced a
competing product. The latest release of IBM's OmniFind Yahoo Edition
contains a number of enhancements. Users can now generate up to five
separate indexes of documents, thereby enabling them to search from a
particular set instead of the entire repository. Other tweaks include
the ability to define additional custom search fields, according to
Aaron Brown, IBM's director of search and content discovery. IBM also
said the software is now easier to install as a Windows service.
OmniFind Yahoo Edition is based on the open source Lucene project. The
update includes the latest version of the Lucene core, according to
Brown: "It helps us close the loop with the community, because we've
contributed a lot of IBM code back into Lucene." However, the update
does not include any scalability improvements, and remains limited to
searching 500,000 documents per instance. Brown said the updates were
primarily guided by feedback from customers. The software has been
downloaded about 25,000 times, according to IBM. Yahoo and IBM released
the first version of the search engine about one year ago. From the
product description: "Open and extensible: (1) Built on Apache Lucene;
(2) Open URL-based APIs (REST); (3) Define, populate and search your
own custom fields; (4) Easily embeddable and customizable UI output
using XML/XSLT/HTML, HTML snippets." The Blog: "Also new in this
release is custom extensible metadata fields. This means you can define
your own fields in the index. Populate them via HTML meta tags,
extracted document meta-data, or directly through the push API, and
then search your custom fields. Not everyone needs this capability but
those that do need it need it badly and we've seen users jump through
incredible hoops to hack this capability into the fixed meta-data
search support we offered previously." More Information

Tuesday, November 27, 2007

GNOME Foundation Defends OOXML Involvement

The GNOME Foundation, recently slammed by critics who accused it of
supporting Microsoft's OOXML (Office Open XML document format), has
issued a statement to clarify its position on the matter. The
International Organization for Standardization (ISO) recently shot down
Microsoft's request that OOXML be given "fast track" status. Another vote is expected
next year. In the meantime, Microsoft has been working with the ECMA
TC45 committee to address concerns over OOXML, which critics have argued
is too proprietary to merit certification as a standard. The
organization's statement seeks to answer such charges. Jody Goldberg,
lead maintainer for the GNOME-backed Gnumeric spreadsheet program, has
worked with ECMA TC45. "The GNOME Foundation's support for Jody's
participation in TC45-M does not indicate endorsement for, or contribution
to, ISO standardisation of the Microsoft Office Open XML formats,"
[the statement] reads. The group also argues that neither OOXML nor
ODF will serve all needs and that the development of standards overall
could be in jeopardy: "We are deeply concerned that abuse of the
standards process is eroding public trust in the value and independence
of international standards. Both ODF and OOXML are very heavily
influenced by their implementation heritage, neither are likely to
deliver the "one true office format," and both communities have -- in
their own way -- played a role in this erosion of trust." More Information

GSA Signs On With SAML

The government's push toward E-Authentication and federated identity
management has given a boost to the Security Assertion Markup Language
(SAML). Federal program managers say the government's pioneering
interoperability testing program for the E-Authentication Federated
Identity and Authentication Initiative has helped drive standard
implementations of the protocol in identity management products. The
E-Authentication program, established in 2002, was using SAML 1.0 as
the protocol for user authentication when it first went live in 2005.
In September the program adopted SAML 2.0, and the General Services
Administration announced it was turning interoperability testing over
to the Liberty Alliance Project. That project, a coalition of 160
industry, nonprofit and government organizations including GSA and the
Defense Department, sponsors standards development for federated identity
management. E-Authentication Solutions forms part of the administration's
e-government initiative. "The purpose is to provide credentialing
services for outward-facing government applications on the Web," said
Tom Kireilis, GSA's acting program executive. The E-Authentication
program provides Assurance Level 1 and 2 credentials, which can be a
user ID and password. Program leaders seek to build a system that would
allow users to sign on across many applications using a single set of
credentials. In addition to the domestic program, several other national
governments are deploying SAML 2.0-based applications to enable
identity-based access. Use of a common standard could allow federated
identity access controls across multiple enterprises. More Information. See also the announcement.

Clean Up Your SOAP-based Web Services

Though SOAP's significance may diminish as Web services evolve, its
importance in the SOA marketplace for the time being is unquestionable.
Therefore, a substantial portion of the QA work by Web service providers
and consumers must entail verifying the accurate exchange of SOAP
messages. Not surprisingly, several SOAP-focused Web service testing
tools have appeared. I had an opportunity to look at five such tools:
AdventNet's QEngine, Crosscheck Networks SOAPSonar, iTKO's LISA,
Mindreef's SOAPscope Server, and Parasoft's SOAtest. Fundamentally,
testing a SOAP-based Web service involves three activities: constructing
a SOAP request, submitting it, and evaluating the response. As easy as
that sounds, it is anything but. An effective SOAP-testing tool cannot
simply rely on a user-friendly mechanism for building requests. It must
also enable the user to organize and arrange requests in realistic
sequences, provide a means of altering request input values, and
intelligently tweak requests so as to expose the Web service to a range
of good and bad usage scenarios. In short, you want the tool to run the
Web service through a reasonable approximation of real-world activity.
In addition, the tool must be equipped with a collection of gadgets for
evaluating responses. Such gadgets should include everything from simple
string matching to executing an arbitrarily complex XQuery on the SOAP
payload. All of the tools reviewed here provide variations on the
preceding capabilities. All make valiant attempts to shield the user
from direct exposure to XML, and some keep users entirely in a
protective GUI so that coding is never necessary. More Information
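The three activities named above (construct a request, submit it, evaluate the response) can be sketched with stdlib Python; the operation name and payload are invented, and the submission step is omitted since it is just an HTTP POST:

```python
import xml.etree.ElementTree as ET

# SOAP 1.1 envelope namespace
SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def make_request(city):
    """Construct a SOAP request for a hypothetical GetWeather operation."""
    env = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_ENV}}}Body")
    op = ET.SubElement(body, "GetWeather")  # invented operation
    ET.SubElement(op, "City").text = city
    return ET.tostring(env, encoding="unicode")

def response_has_fault(response_xml):
    """One simple response check: did the service return a SOAP Fault?"""
    root = ET.fromstring(response_xml)
    return root.find(f".//{{{SOAP_ENV}}}Fault") is not None

request = make_request("Oslo")

# A canned faulty response, standing in for a real service reply
fault = (f'<e:Envelope xmlns:e="{SOAP_ENV}"><e:Body>'
         f'<e:Fault><faultstring>bad input</faultstring></e:Fault>'
         f'</e:Body></e:Envelope>')
print(response_has_fault(fault))
```

The tools under review go far beyond this, of course: sequencing requests, fuzzing input values, and running XQuery assertions against the payload.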

KML 2.2: An OGC Best Practice

The Open Geospatial Consortium recently announced the approval of
"KML 2.2: An OGC Best Practice" (reference: OGC 07-113r1) as an
official OGC Best Practice document. "Google submitted KML (formerly
Keyhole Markup Language) to the Open Geospatial Consortium (OGC) to
be evolved within the OGC consensus process with the following goal:
KML Version 2.2 will be an adopted OGC implementation standard. Future
versions may be harmonized with relevant OpenGIS standards that
comprise the OGC standards baseline. There are four objectives for
this standards work: (1) That there be one international standard
language for expressing geographic annotation and visualization on
existing or future web-based online and mobile maps (2D) and earth 3D
browsers; (2) That KML be aligned with international best practices
and standards, thereby enabling greater uptake and interoperability
of earth browser implementations; (3) That the OGC and Google will
work collaboratively to insure that the KML implementer community is
properly engaged in the process and that the KML community is kept
informed of progress and issues; (4) That the OGC process will be used
to insure proper life-cycle management of the KML candidate standard,
including such issues as backwards compatibility. KML is an XML
language focused on geographic visualization, including annotation of
maps and images. Geographic visualization includes not only the
presentation of graphical data on the globe, but also the control of
the user's navigation in the sense of where to go and where to look.
KML is [thus] complementary to most of the key existing OGC standards
including GML (Geography Markup Language), WFS (Web Feature Service)
and WMS (Web Map Service). Currently, KML (2.2) utilizes certain
geometry elements derived from GML version 2.1.2. These elements include
point, line string, linear ring, and polygon." More Information. See also OGC Best Practices Documents.
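
A minimal KML document shows the annotation-and-visualization focus described above. The namespace follows the OGC KML 2.2 convention; the placemark name and coordinates are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Example site</name>
    <description><![CDATA[<b>HTML</b> markup is allowed in descriptions]]></description>
    <Point>
      <!-- longitude,latitude[,altitude] -->
      <coordinates>-71.06,42.36</coordinates>
    </Point>
  </Placemark>
</kml>
```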

First Public Working Draft: HTML Design Principles

W3C announced that the HTML Working Group has published a First Public
Working Draft for "HTML Design Principles." This document describes the
set of guiding principles used by the HTML Working Group for the
development of HTML5, expected to define the fifth major revision of
the core language of the World Wide Web. These design principles are an
attempt to capture consensus on design approach in the areas of
compatibility, utility, interoperability, and universal access. From
the Introduction: "In the HTML Working Group, we have representatives
from many different communities, including the WHATWG and other W3C
Working Groups. The HTML 5 effort under WHATWG, and much of the work
on various W3C standards over the past few years, have been based on
different goals and different ideas of what makes for good design. To
make useful progress, we need to have some basic agreement on goals
for this group. These design principles are an attempt to capture
consensus on design approach. They are pragmatic rules of thumb that
must be balanced against each other, not absolutes. They are similar in
spirit to the TAG's findings in Architecture of the World Wide Web, but
specific to the deliverables of this group. Conformance for Documents
and Implementations: Many language specifications define a set of
conformance requirements for valid documents, and corresponding
conformance requirements for implementations processing these valid
documents. HTML 5 is somewhat unusual in also defining implementation
conformance requirements for many constructs that are not allowed in
conforming documents. This dual nature of the spec allows us to have
a relatively clean and understandable language for authors, while at
the same time supporting existing documents that make use of older or
nonstandard constructs, and enabling better interoperability in error handling.

Wednesday, November 21, 2007

Syntext Xsl-Status: A Progress Tracking Tool for XSLT Stylesheet Developers

Syntext developers recently announced the release of
Xsl-Status V1.3.0, described as "an indispensable progress tracking
tool for XSLT stylesheet developers." It helps you track which elements
of an XML Schema are supported in your XSLT stylesheet, what the
development status of XSLT templates is, and what template supports
a particular XML element. The new release features the following
enhancements: (1) The ability to generate multiple reports at a time;
(2) The ability to group generated reports; (3) The ability to generate
summary reports; (4) XML Catalogs support. The new version of Xsl-Status
has made it possible for Syntext Serna XSLT stylesheet developers to
generate reports for several document types simultaneously (e.g. DITA
Task, Topic, Concept) and have them grouped by category (e.g. Serna
DITA 1.3, Serna DITA 1.1). Multiple reports are displayed conveniently
as structured lists in summary reports, with links letting you access
a desired report easily. Xsl-Status was originally designed for
developers creating XSLT stylesheets for Syntext Serna WYSIWYG XML
editor. Some of the supported Schemas (e.g. DITA, Docbook, S1000D)
are rather large and contain hundreds of elements. In order to support
the evolving stylesheets, you need to know which elements are supported,
which elements have yet to be supported, which elements are being
tested, etc. To run this package, you need Python and XSLTProc installed
on your computer. The software is made available free under the terms
of the Apache License Version 2.0. More Information
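Xsl-Status itself is a Python/XSLTProc package; the core bookkeeping it performs can be sketched as a diff between schema elements and template match patterns. This toy version only handles simple element-name matches (real XSLT match patterns are full XPath patterns), and the DITA-flavored element names are illustrative:

```python
import xml.etree.ElementTree as ET

XSL = "{http://www.w3.org/1999/XSL/Transform}"

def coverage(schema_elements, stylesheet_xml):
    """Split schema elements into (has a template, still unsupported)."""
    root = ET.fromstring(stylesheet_xml)
    # Collect the match patterns of all top-level templates
    matched = {t.get("match") for t in root.findall(f"{XSL}template")}
    supported = sorted(e for e in schema_elements if e in matched)
    missing = sorted(e for e in schema_elements if e not in matched)
    return supported, missing

stylesheet = """<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="task"/>
  <xsl:template match="topic"/>
</xsl:stylesheet>"""

print(coverage({"task", "topic", "concept"}, stylesheet))
```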

Web Distributed Authoring and Versioning (WebDAV) SEARCH

Editors of the IETF Internet Draft "Web Distributed Authoring and
Versioning (WebDAV) SEARCH" have released an updated version, available
from the RFC Libraries. WebDAV provides a network protocol for creating
interoperable, collaborative applications. XML properties provide
storage for arbitrary metadata, such as a list of authors on Web
resources. These properties can be efficiently set, deleted, and
retrieved using the DAV protocol. DASL, the DAV Searching and Locating
protocol, provides searches based on property values to locate Web
resources. The updated document defines Web Distributed Authoring and
Versioning (WebDAV) SEARCH, an application of HTTP/1.1 forming a
lightweight search protocol to transport queries and result sets that
allows clients to make use of server-side search facilities. It is
based on the expired internet draft for DAV Searching and Locating.
"Requirements for DAV Searching and Locating" describes the motivation
for DASL. In this specification, the terms "WebDAV SEARCH" and "DASL"
are used interchangeably. DASL minimizes the complexity of clients so
as to facilitate widespread deployment of applications capable of
utilizing the DASL search mechanisms. The Query Grammar provides a set
of definitions of XML elements, attributes, and constraints on their
relations and values that defines a set of queries and the intended
semantics. DASL at Work: (1) The client constructs a query using the
'DAV:basicsearch' grammar; (2) The client invokes the SEARCH method
on a resource that will perform the search (the search arbiter) and
includes a 'text/xml' or 'application/xml' request entity that contains
the query; (3) The search arbiter performs the query; (4) The search
arbiter sends the results of the query back to the client in the
response. The server MUST send an entity that matches the WebDAV
multistatus format.
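
Step (1) of the exchange above, a query in the 'DAV:basicsearch' grammar, looks roughly like the following request body. The scope, selected property, and threshold value are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Body of a SEARCH request: find resources under /docs/
     larger than 100000 bytes and return their display names -->
<D:searchrequest xmlns:D="DAV:">
  <D:basicsearch>
    <D:select>
      <D:prop><D:displayname/></D:prop>
    </D:select>
    <D:from>
      <D:scope>
        <D:href>/docs/</D:href>
        <D:depth>infinity</D:depth>
      </D:scope>
    </D:from>
    <D:where>
      <D:gt>
        <D:prop><D:getcontentlength/></D:prop>
        <D:literal>100000</D:literal>
      </D:gt>
    </D:where>
  </D:basicsearch>
</D:searchrequest>
```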

Web Maps with the Google Map API

For the last five years, we had been using a proprietary solution to
manage a small percentage of the geographical information about various
university locations. This solution had only a few locations and would
run in only one browser on one operating system. Moreover, it required
users to download a big plug-in. Also, it wasn't stable under heavy use.
In this article, I present our solution -- a web front-end that utilizes
several aspects of the freely available Google Map API to provide a
usable, robust, cross-platform web map. To get the precise geographical
location for specific sites, you could use some kind of a geocoder tool.
There are several free ones (the Perl module Geo::Coder::US, for instance),
but most work only with U.S. addresses. For our purposes, we used Google
Earth, which in its latest version combines satellite
imagery, maps, terrain, and 3D buildings. This tool gives a simple
interface to navigate over a global satellite map and manually assign
points of interest with markers, polylines, and polygons. This software
was so straightforward that we could give it to our team of rural
engineers and, after a few minutes of training, they were able to
represent a large amount of information that was scattered in a variety
of files in different engineering software formats. The original file
was converted to KML, short for "Keyhole Markup Language", an XML-based
language for managing three-dimensional geospatial data. This file
contained coordinates, labels, and even HTML descriptions, in a format
that was human readable and easy to process using XSLT. With the launch
of its most-recent mapping API, Google has provided web developers with
a feature-rich toolset for representing geographical information in a
web environment. Besides the various functionality that is already present
out-of-the-box, the JavaScript-based environment provides the necessary
facilities to extend the default behavior and satisfy even the most
challenging requirements. It is also remarkable that all these features
come in a package that has been developed from the ground up to be
compatible with most major environments (browsers).
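The KML-to-map step described above (the article uses XSLT) amounts to pulling placemark names and coordinates out of the KML so they can be fed to the Google Map API as markers. A stdlib Python sketch, with an invented document and the OGC KML 2.2 namespace assumed (older Keyhole files used a different URI):

```python
import xml.etree.ElementTree as ET

KML = "{http://www.opengis.net/kml/2.2}"

def placemarks(kml_xml):
    """Extract (name, lat, lon) tuples from every Placemark in a KML file."""
    root = ET.fromstring(kml_xml)
    out = []
    for pm in root.iter(f"{KML}Placemark"):
        name = pm.findtext(f"{KML}name")
        coords = pm.findtext(f".//{KML}coordinates").strip()
        # KML stores "longitude,latitude[,altitude]"
        lon, lat = map(float, coords.split(",")[:2])
        out.append((name, lat, lon))
    return out

doc = """<kml xmlns="http://www.opengis.net/kml/2.2"><Document>
  <Placemark><name>Library</name>
    <Point><coordinates>-3.70,40.41,0</coordinates></Point></Placemark>
</Document></kml>"""

print(placemarks(doc))
```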

WS-I Releases Updated Basic Profile 1.2 and 2.0 Specifications

Members of the Web Services-Interoperability Organization (WS-I) Basic
Profile Working Group are currently working on Basic Profile 1.2 and
Basic Profile 2.0. Updated drafts have been published for both
specifications. (1) The latest "Basic Profile Version 1.2" Working
Group Approval Draft defines a set of non-proprietary Web services
specifications, along with clarifications, refinements, interpretations
and amplifications of those specifications which promote interoperability.
This Profile is derived from the Basic Profile 1.1 by incorporating any
errata to date and including those requirements related to the
serialization of envelopes and their representation in messages from
the Simple SOAP Binding Profile 1.0. This Profile is NOT intended to
be composed with the Simple SOAP Binding Profile 1.0. The Attachments
Profile 1.0 adds support for SOAP with Attachments, and is intended to
be used in combination with this Profile. There are a few requirements
in the Basic Profile 1.2 that may present compatibility issues with
clients, services and their artifacts that have been engineered for
Basic Profile 1.1 conformance. However, in general, the Basic Profile
WG members have tried to preserve as much forward and backward
compatibility with the Basic Profile 1.1 as possible so as not to
disenfranchise clients, services and their artifacts that have been
deployed in conformance with the Basic Profile 1.1. (2) The "Basic
Profile Version 2.0" (Working Group Draft) is the first version of
the WS-I Basic Profile that changes the version of SOAP in the profile
scope from SOAP 1.1 to the W3C SOAP 1.2 Recommendation. As such,
clients, servers and the artifacts that comprise a Basic Profile 1.0,
1.1 or 1.2 conformant application are inherently incompatible with an
application that is conformant with the Basic Profile 2.0. However, in
general, the Basic Profile WG members have tried to preserve in the
Basic Profile 2.0 as much consistency with the guidance and constraints
of the Basic Profile 1.0, 1.1 and 1.2 as possible. This has been in
part facilitated by the fact that the WG tried to remain consistent
in the guidance and constraints of the original Basic Profile with the
decisions that were being made in the context of the W3C XML Protocols
WG as they were concurrently working on the SOAP 1.2 specifications. For
the most part, issues that were resolved in the context of the
development of the Basic Profile 1.0, 1.1 and 1.2 were not revisited. More Information

Tuesday, November 20, 2007

NETCONF Configuration Interface Advertisement with WSDL and XSD

An initial draft of "NETCONF Configuration Interface Advertisement with
WSDL and XSD" has been released. The IETF NETCONF Working Group developed
the NETCONF protocol as a standard configuration protocol between a
network management system and network devices. By using this unified
management and configuration protocol, operators can reduce management
and configuration costs. Developers of a network management system (NMS)
read the configuration interface definition document and write code that
accesses the configuration interface of the NETCONF device. Currently,
there is no standard way to obtain the XML Schema of a target NETCONF
device: to implement a NETCONF NMS, developers must examine the schema
that defines the configuration data of the target device. This memo
describes a configuration interface advertisement method for NETCONF
device developers. In the proposal, developers obtain the configuration
interface definition information of target NETCONF devices and, in their
development environment, generate stub classes to control the devices.
The NETCONF device advertises its configuration interface through a WSDL
file, which describes the message type of each NETCONF operation of the
device. The WSDL file contains an XML Schema in its types element that
defines the types used in the configuration data. With this configuration
interface advertisement, NMS developers can improve their development
efficiency. The document provides requirements for NETCONF devices and a
programming model to reduce the implementation cost of an NMS that
manages NETCONF devices. First, the document standardizes how to describe
the configuration interface of NETCONF devices. Second, it standardizes
how to describe the type definitions of the XML elements that occur in
NETCONF protocol messages. Last, it standardizes how to advertise the
above configuration interface definition information. More Information
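The proposal's core mechanism is that the schema for a device's configuration data travels inside the WSDL types element. A sketch of what an NMS developer's tooling would do with such an advertisement, using an invented, stripped-down WSDL (element names here are illustrative, not from the draft):

```python
import xml.etree.ElementTree as ET

# Hypothetical, stripped-down WSDL of the kind the draft envisions a NETCONF
# device advertising; only the types section matters for this sketch.
WSDL = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <types>
    <xs:schema targetNamespace="urn:example:device-config">
      <xs:element name="interface-config" type="xs:string"/>
      <xs:element name="routing-config" type="xs:string"/>
    </xs:schema>
  </types>
</definitions>"""

XS = "{http://www.w3.org/2001/XMLSchema}"

def advertised_config_elements(wsdl_text):
    """List the schema elements a device advertises in wsdl:types."""
    root = ET.fromstring(wsdl_text)
    return [el.get("name") for el in root.iter(XS + "element")]
```

In practice the same information would feed a stub-class generator rather than a simple listing.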

Oracle Customers Like Compression, Storage Management, XML Handling

Carlo Tiu, senior programmer analyst with Northern California Power
Agency, works for an independent, "green" producer of hydro and
geothermal power that also coordinates contributions to the state
electrical grid from other independents. Its members include the cities
of Palo Alto, Lodi, and Santa Clara, which run their own generating
plants. One of 43,000 attendees at Oracle OpenWorld in San Francisco,
Tiu spoke about XML and XQuery. Tiu is overseeing the transformation
of the agency's information exchange systems from hard-to-implement,
point-to-point communications to one that captures and exchanges
standardized XML data. To him, the XML DB capabilities built into the
Oracle database are a lifesaver. For each producer, the agency must
capture on a regular basis the amount of wholesale power it's supplying
the grid and the value of that power. The data is captured in intervals
throughout the day, resulting in large XML files that must be processed
by the agency's system. The agency has designed an XML schema for capturing
the data and is sharing that design with the other suppliers as open
source code. As more producers adopt it, it will become easier for
coordinators to see what's going on within the state power distribution
system. The agency's goal is not only to improve its own operations, but
"to bring up the level of XML expertise in the electricity marketplace
and reduce costs for all utilities." The Federal Energy Commission and
the California Independent System Operator, a non-profit corporation,
will require the use of XML data by suppliers in March of 2008. Oracle
has supported XQuery since the 10g version came out. Oracle 11g released
in July included more granular XML data storage and indexing,
enhancements that make handling large amounts of XML data more efficient.
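The agency's workload, in miniature: interval measurements accumulate into large XML files that must be aggregated. The element names below are invented for illustration (the agency's actual schema is not shown in the article):

```python
import xml.etree.ElementTree as ET

# Hypothetical interval report; the agency's real schema is not given in
# the article, so the element and attribute names here are illustrative.
REPORT = """<meterData producer="Lodi">
  <interval start="2007-11-20T00:00" mwh="12.5"/>
  <interval start="2007-11-20T01:00" mwh="11.0"/>
  <interval start="2007-11-20T02:00" mwh="13.5"/>
</meterData>"""

def total_mwh(xml_text):
    """Sum the wholesale energy reported across all intervals."""
    root = ET.fromstring(xml_text)
    return sum(float(i.get("mwh")) for i in root.iter("interval"))
```

Inside Oracle XML DB, this kind of aggregation would instead be expressed in XQuery against stored XML.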

Using XML and Jar Utility API to Build a Rule-Based Java EE Auto-Deployer

Today's Java EE application deployment is a common task, but not an easy
job. If you have ever been involved in deploying a Java EE application
to a large enterprise environment, no doubt you have faced a number of
challenges before you click the deploy button. For instance, you have
to figure out how to configure JMS, data sources, database schemas, data
migrations, third-party products like Documentum for web publishing,
dependencies between components and their deployment order, and so on.
Although most of today's application servers support application deployment
through their administrative interfaces, the deployment task is still far
from being a one-button action. In the first few sections of this article,
the author discusses some of the challenges of Java EE deployment. Then
he introduces an intelligent rule-based auto-deployer application and
explains how it can significantly reduce the complexity of Java EE system
deployment. He also gives a comprehensive example showing how to build
XML rules using the XStream utility library, how to extend and analyze
the standard Java EE packaging (EAR), and how to perform a complex
deployment task just by pushing one button. Adding a rule-based
auto-deployer to your Java EE application deployment task has several
benefits. It provides centralized and transacted deployment management:
in a very large enterprise environment, applications are usually deployed
onto hundreds of different systems, and the deployer application provides
a good means of centralized management. In addition, it can manage
deployment as a transacted action by implementing un-deploy/re-deploy
methods, so the deployer can roll back a deployment or switch to a
different version very easily.
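The ordering problem at the heart of such a deployer (JMS before the EAR, schemas before both) is a dependency sort over the rule document. The article builds its rules with XStream in Java; this Python sketch, with an invented rule format, only illustrates how declared dependencies yield a deployment order:

```python
import xml.etree.ElementTree as ET

# Hypothetical rule document; the article builds comparable rules with
# XStream in Java. This sketch only shows ordering by dependsOn edges.
RULES = """<deployment>
  <component name="schema"/>
  <component name="jms" dependsOn="schema"/>
  <component name="ear" dependsOn="jms"/>
</deployment>"""

def deploy_order(rules_xml):
    """Return component names in an order that respects dependsOn edges."""
    deps = {}
    for c in ET.fromstring(rules_xml).iter("component"):
        d = c.get("dependsOn")
        deps[c.get("name")] = [d] if d else []
    order, seen = [], set()
    def visit(name):          # depth-first: dependencies first
        if name in seen:
            return
        seen.add(name)
        for d in deps.get(name, []):
            visit(d)
        order.append(name)
    for name in deps:
        visit(name)
    return order
```

Un-deploy for rollback would simply walk the same order in reverse.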

Visual Studio 2008 Ships

Microsoft has released its Visual Studio 2008 and the .Net Framework
3.5 to manufacturing. The technology is available to MSDN subscribers.
Company officials said Visual Studio 2008, codenamed Orcas, contains
more than 250 new features and delivers significant enhancements in
every edition, including Visual Studio Express and Visual Studio Team
System to enable developers of all levels -- from hobbyists to enterprise
development teams to build applications. Among the improvements in the
Orcas release is that Microsoft has made web development easier with
new support for Web server communication techniques for Asynchronous
JavaScript and XML and JavaScript Object Notation (AJAX/JSON) enabled
Web sites. Also, new ASP.NET controls allow for better page management
and templates, and Windows Communication Foundation (WCF) delivers
native support for RSS and REST (Representational State Transfer).
The .NET Framework 3.5 also delivers several new features, including
capabilities for Web 2.0, Service-Oriented Architecture (SOA) and
Software plus Services-based applications. Workflow-enabled services
provide new programming model classes that simplify building
workflow-enabled services by using Windows Communication Foundation
and Windows Workflow Foundation. This allows .Net Framework developers
to build business logic for a service using Windows Workflow Foundation
and expose messaging from that service using WCF. Microsoft also has
placed additional Web services protocol support in WCF, including Web
Services Atomic Transaction (WS-AtomicTransaction) 1.1,
WS-ReliableMessaging 1.1, WS-SecureConversation, and Web Services
Coordination (WS-Coordination) 1.1.

MPDF: A User Agent Profile Data Set for Media Policy

Members of the IETF Session Initiation Proposal Investigation Working
Group have published an updated version of the "User Agent Profile
Data Set for Media Policy" Internet Draft. This draft specification
defines a document format for the media properties of Session Initiation
Protocol (SIP) sessions. Examples of media properties are the codecs
or media types used in a session. This document format is based on XML
and extends the Schema for SIP User Agent Profile Data Sets. It can be
used to describe the properties of a specific SIP session or to define
policies that are then applied to different SIP sessions. Section 8
supplies the RELAX NG Definition. The Framework for Session Initiation
Protocol (SIP) User Agent Profile Delivery and the Framework for SIP
Session Policies define mechanisms to convey session policies and
configuration information from a network server to a user agent. An
important piece of the information conveyed to the user agent relates
to the media properties of the SIP sessions set up by the user agent.
Examples of these media properties are the codecs and media types used,
the media-intermediaries to be traversed or the maximum bandwidth
available for media streams. The Media Policy Dataset Format (MPDF)
specification is defined in this document for SIP session media
properties. This format can be used in two ways: first, it can be used
to describe the properties of a given SIP session (e.g., the media types
and codecs used). These MPDF documents are called session info documents
and they are usually created based on the session description of a
session. Second, the MPDF format can be used to define policies for SIP
sessions in a session policy document. A session policy document defines
properties (e.g., the media types) that can or cannot be used in a
session, independent of a specific session description. The two types
of MPDF documents, session information and session policy documents,
share the same set of XML elements to describe session properties. A
user agent can receive multiple session policy documents from different
sources. These documents need to be merged into a single document the
user agent can work with. This document specifies rules for merging each
of the XML elements defined. It should be noted that these merging rules
are part of the semantics of the XML element. User agents implement the
merging rules as part of implementing the element semantics. As a
consequence, it is not possible to build an entity that can mechanically
merge two session policy documents without understanding the semantics
of all elements in the input documents. More Information
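The draft's point that merging is part of each element's semantics can be illustrated with one plausible rule: intersecting the media types that two policy documents permit. The documents and the rule below are invented for illustration; the actual MPDF merging rules are defined per element in the draft itself:

```python
import xml.etree.ElementTree as ET

# Two hypothetical policy documents. Intersecting permitted media types is
# one plausible per-element merging rule, shown purely for illustration.
POLICY_A = ("<policy><media-types><type>audio</type>"
            "<type>video</type></media-types></policy>")
POLICY_B = "<policy><media-types><type>audio</type></media-types></policy>"

def merge_media_types(*docs):
    """Intersect the media types permitted by each policy document."""
    allowed = None
    for doc in docs:
        types = {t.text for t in ET.fromstring(doc).iter("type")}
        allowed = types if allowed is None else allowed & types
    return sorted(allowed)
```

A different element (say, maximum bandwidth) would need a different rule (take the minimum), which is exactly why a generic merger cannot work without knowing each element's semantics.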

DAISY Consortium and Microsoft Collaborate to Develop OpenXML to DAISY

The DAISY Consortium (Digital Accessible Information System Consortium)
recently announced a collaborative effort with Microsoft to release
a "Save As DAISY XML" feature next year. The DAISY Standard has been
adopted throughout the world by libraries and organizations producing
and distributing accessible reading materials. This collaboration
between Microsoft and the DAISY Consortium is a major breakthrough in
the movement to provide feature-rich, structured information to the
millions of people around the world who are unable to read print due to
a visual, physical, perceptual, developmental, cognitive, or learning
disability. The free, downloadable plug-in for Microsoft Word will
convert Open XML-based word processing documents into DAISY XML,
technically referred to as DTBook. DAISY XML, or DTBook, is the
foundation of the globally accepted DAISY Standard for reading and
publishing navigable multimedia content. The DAISY XML that is generated
is the marked up file that can then be processed to produce DAISY Digital
Talking Books and other accessible formats. This "Open XML to DAISY XML"
converter will be one of several authoring and conversion options that
produce DTBook. The development will be hosted on SourceForge. "Save As
DAISY XML" creates a DAISY XML file (DTBook) which requires further
processing to become a DAISY Digital Talking Book (DTB). This additional
processing may be accomplished using a number of commercial and open
source conversion and production tools. More Information See also the NISO press release

Friday, November 16, 2007

IBM's Hosted Symphony: Will Anyone Listen?

IBM appears to be getting ready to offer its Lotus Symphony suite as a
hosted application, competing directly with Google Apps and Microsoft's
Office Live. Does IBM's entry into the on-demand desktop application
space signal trouble for Office? Microsoft's Office Live strategy is
still primarily focused on small business, for groups of 10 or fewer
users. It's not an enterprise-changing play. Microsoft's enterprise
applications on demand are more in the form of services, not desktop
tools -- Exchange and SharePoint, for example. IBM started giving away
Symphony for free in September, following a similar path to Sun's with
StarOffice, though Symphony is admittedly not the same thing as Sun's
commercial release. The chances, however, of a free Symphony
desktop suite displacing Office in the corporate world are close to nil.
And while a hosted version might be interesting to organizations still
using Lotus Notes, it's doubtful that it would upset anyone's applecart,
aside from Google's efforts... In a corporate environment, there's
concern over capturing workflow for compliance and the security of an
Internet-based tool -- which can be solved by hosting internally. But
if you're hosting it internally, you're really just solving one
problem -- software distribution -- and trading it for another set.
Now, you've got to manage the servers, deal with network bandwidth
demands as XML traffic goes up, and shift your storage needs from
network shared drives to server-side storage. That's not to say there
isn't anything interesting about hosted desktop applications. Hundreds
of organizations are already using hosted applications -- through
desktop virtualization via Citrix and Terminal Server.

Web Services Hints and Tips: JAX-RPC versus JAX-WS, Part 5

Java API for XML-based RPC (JAX-RPC) supports the SOAP with Attachments
(Sw/A) specification, while Java API for XML Web Services (JAX-WS)
supports Sw/A along with the new Message Transmission Optimization
Mechanism (MTOM) specification. The attachment model for JAX-RPC is
Sw/A. Since JAX-RPC was written, a new attachment model has come onto
the scene: MTOM. JAX-WS provides Sw/A support just as JAX-RPC does, but
adds support for MTOM. JAX-WS supports MTOM via the Java Architecture
for XML Binding (JAXB) specification, which includes APIs for marshalling
and unmarshalling both Sw/A and MTOM attachments. In this article the
author examines both models through examples. This tip compares only
the WSDLs and the Java programming models; comparing the wire-level
messages is left as an exercise to the reader. JAX-RPC supports the Sw/A
model. JAX-WS supports Sw/A as well, but it has also stepped up to the
new MTOM model. MTOM is an improvement upon Sw/A in a number of ways:
(1) Everything necessary to create a Java interface is now available
in the WSDL interface; (2) MTOM is usable in a document/literal wrapped
WSDL; (3) MTOM allows attachments to be optimized, but it doesn't force
the use of attachments as Sw/A does.
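The optimization the tip describes can be sketched at the infoset level. In MTOM the binary logically remains base64 content, but on the wire it may be replaced by an xop:Include element referencing a MIME part (the `photo` element name is invented; the xop:Include element and its namespace come from the W3C XOP Recommendation):

```python
import base64
import xml.etree.ElementTree as ET

XOP_NS = "http://www.w3.org/2004/08/xop/include"

def inline_base64(data):
    """Unoptimized form: binary carried inline as base64 text."""
    el = ET.Element("photo")  # hypothetical payload element
    el.text = base64.b64encode(data).decode("ascii")
    return el

def xop_reference(content_id):
    """MTOM-optimized form: an xop:Include points at a MIME part."""
    el = ET.Element("photo")
    ET.SubElement(el, "{%s}Include" % XOP_NS, href="cid:" + content_id)
    return el
```

Because the optimized and unoptimized forms describe the same infoset, a receiver can treat them interchangeably, which is what lets MTOM make attachments optional rather than mandatory.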

Additional XML Security Uniform Resource Identifiers (URIs)

This document expands and updates the list of URIs intended for use
with XML Digital Signatures, Encryption, Canonicalization, and Key
Management specified in RFC 4051 (April 2005). These URIs identify
algorithms and types of information. XML Digital Signatures,
Canonicalization, and Encryption have been standardized by the W3C
and by the joint IETF/W3C XMLDSIG working group. All of these are now
W3C Recommendations and IETF Informational or Standards Track documents.
All of these standards and recommendations use URIs (RFC 3986) to
identify algorithms and keying information types. This document is
a convenient reference list of URIs and descriptions for algorithms
in which there is substantial interest but which can not or have not
been included in the main documents for some reason. Note in particular
that raising XML digital signature to Draft Standard in the IETF required
removing from the main standards document any algorithms for which
interoperability had not been demonstrated. As a result, the Minimal
Canonicalization algorithm, in which there appears to be continued
interest, was dropped from the standards-track specification. It was
included in RFC 4051 and is included here.
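In practice these URIs act as registry keys: an implementation dispatches on the algorithm URI it finds in a SignedInfo or EncryptionMethod element. A minimal sketch (the three digest URIs are the real identifiers from the XML Signature and XML Encryption specifications; the mapping to hashlib is this sketch's own convenience):

```python
import base64
import hashlib

# Digest-algorithm URIs as used by XML Signature / XML Encryption; the
# hashlib mapping is an illustrative convenience, not part of the RFCs.
DIGEST_ALGORITHMS = {
    "http://www.w3.org/2000/09/xmldsig#sha1": hashlib.sha1,
    "http://www.w3.org/2001/04/xmlenc#sha256": hashlib.sha256,
    "http://www.w3.org/2001/04/xmlenc#sha512": hashlib.sha512,
}

def digest_value(algorithm_uri, data):
    """Compute a base64 DigestValue for the algorithm named by URI."""
    h = DIGEST_ALGORITHMS[algorithm_uri](data)
    return base64.b64encode(h.digest()).decode("ascii")
```

An unrecognized URI raising KeyError is the desired behavior: a verifier must not silently substitute a different algorithm.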

Open Source and Messaging's Future

This article presents a conversation with Art Botterell, national
expert in warning systems and former FEMA official. When Art Botterell
was helping develop public warning systems in California a decade ago,
the state already had sirens and broadcast TV messaging. So he and
others began adding telephones, weather radios and computers. He saw
an urgent need for a common messaging format that would be freely
available to all vendors and users. He helped organize a grass-roots
effort in 2000 and 2001 for more than 100 computer programming
volunteers active in emergency management to create an Extensible
Markup Language format for public warning messages. It was named the
Common Alerting Protocol (CAP). Botterell: "What we found was that
if you start with the technology, you have to devise the message to
fit the technology. With CAP, we started with the social science and
the need for public warnings. We defined the characteristics of an
effective warning system and messaging system and developed it from
there to fit multiple devices and formats. An effective message has
to hit you two to three times, so it has to be multimodal. Most people
will not evacuate based on a solitary message... You can prepare for
a disaster with the National Incident Management System and the National
Response Framework, but the reality is always messy and unpredictable.
There always will be chaos and people who have not worked together
before. You need something like a Google search engine available to
help officials quickly identify all assets available for response,
regardless of the source. That is the next frontier. We need help
with navigation, indexing and discussing. We constantly need innovation
to solve the really deep and interesting problems. If we allow the
existing set of contractors to define the space, they will define it
with solutions that they already have. I hope that the CAP can serve
as an example of an alternative way of doing things from the grass-roots.
Open-source computing is a vital partner for developing solutions."
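CAP's design goal of one format feeding many devices shows in how little code a consumer needs. Below, a skeletal CAP 1.1 alert (the namespace is the real OASIS one; the alert content is a trimmed illustration) is reduced to the fields a multimodal dissemination system would route on:

```python
import xml.etree.ElementTree as ET

# A skeletal CAP 1.1 alert; real alerts carry many more elements.
CAP = """<alert xmlns="urn:oasis:names:tc:emergency:cap:1.1">
  <identifier>KSTO1055887203</identifier>
  <status>Actual</status>
  <msgType>Alert</msgType>
  <info>
    <event>Flash Flood Warning</event>
    <urgency>Immediate</urgency>
    <severity>Severe</severity>
  </info>
</alert>"""

NS = {"cap": "urn:oasis:names:tc:emergency:cap:1.1"}

def summarize(alert_xml):
    """Pull the fields a dissemination system would route on."""
    root = ET.fromstring(alert_xml)
    return {
        "id": root.findtext("cap:identifier", namespaces=NS),
        "event": root.findtext("cap:info/cap:event", namespaces=NS),
        "severity": root.findtext("cap:info/cap:severity", namespaces=NS),
    }
```

Each delivery channel (siren controller, weather radio, web feed) can consume the same alert and render it in its own modality.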

Mobile AJAX: Frequently Asked Questions

Mobile AJAX is the extension of AJAX principles to the mobile
environment, which includes other constrained devices such as gaming
consoles or set-top boxes featuring Web browsers. While technologically
the same thing, Mobile AJAX is looked at as a special case of AJAX,
as it deals with problems specific to the mobile space including the
areas of constrained devices and constrained Web browsers in general.
AJAX itself is a browser technology that involves the use of existing
Web standards and technologies (XML/XHTML, DOM, CSS, JavaScript, XHR
- XMLHttpRequest) to create more responsive Web applications that
reduce bandwidth usage by avoiding full page refreshes and providing
a more "desktop application-like" user experience. At a minimum, the
requirements for Mobile AJAX include: (1) JavaScript support; (2)
XMLHttpRequest object or equivalent ActiveX (for IE only); (3) DOM
manipulation functions or innerHTML support -- to display request
results. On the one hand, Mobile AJAX will be transparent to the end
user. For instance, all Nokia devices supporting the S60 and Opera
browsers support AJAX - but that makes little difference to the end
user. On the other hand, Mobile AJAX enables widgets. Thus, the visual
(end-user) manifestation of Mobile AJAX may be in the form of widgets
or rich browser-based applications such as those we see on new Nokia
phones or in Opera browsers.

W3C mobileOK Checker: Automatic Verification of mobileOK Content

W3C has announced the W3C mobileOK Checker (alpha) as a new means for
people to create and find mobile friendly content. W3C invites Web
authors to run the alpha release of the W3C mobileOK checker and make
their content work on a broad range of mobile devices. The mobileOK
Checker runs tests defined in the "W3C mobileOK Basic Tests 1.0"
specification, which has recently been advanced to the status of
Candidate Recommendation. The tests themselves are based upon W3C's
Mobile Web Best Practices 1.0, published as part of W3C's Mobile Web
Initiative. The Best Practices describe how to reduce the cost of
authoring and to improve the mobile browsing experience. Any tool that
implements the Basic Tests can verify automatically whether content
is mobile friendly. The "mobileOK Basic" specification defines a
scheme for assessing whether Web resources (Web content) can be
delivered in a manner that is conformant with Mobile Web Best Practices
to a simple and largely hypothetical mobile user agent, the Default
Delivery Context. This document describes W3C mobileOK Basic tests for
delivered content, and describes how to emulate the DDC when requesting
that content. mobileOK Basic is the lesser of two levels of claim, the
greater level being mobileOK Pro, described separately. Claims to be
W3C mobileOK Basic conformant are represented using Description
Resources (POWDER) also described separately. The intention of mobileOK
is to help catalyze development of Web content that provides a
functional user experience in a mobile context.

Manakin: A New Face for DSpace

Manakin is an abstract framework for building repository interfaces
that currently provides an implementation of the DSpace digital
repository system. The Manakin framework introduces three unique
concepts: the DRI schema, Aspects, and Themes. These are the basic
components a Manakin developer will use in creating new functionality
for a repository or modifying the repository's look-and-feel. Manakin
is built on the Apache Cocoon web development framework, which uses an
XML-based pipeline architecture. The pipelined architecture means that
an individual page is generated through the combination of many
components arranged together along a "pipeline", each feeding into
another until the final page is produced. Using this technique, web
sites are built through the arrangement of these pipeline components,
an approach that is sometimes described as 'Lego-like'. Manakin
builds upon these basic Cocoon concepts to create the DRI schema,
aspects, and themes. The Digital Repository Interface (DRI) is an XML
schema that describes an abstract representation of a repository page.
Since repositories fundamentally interact with artifacts and their
metadata, the DRI schema must be able to both encode structural concepts
and natively represent metadata in various forms. The structural
portions of the schema are derived from the Text Encoding Initiative
(TEI) schema for its simplicity and expressiveness. The metadata
portions of DRI utilize the METS schema for packaging and encoding
relationships between an item's components. The descriptive metadata
contained within the METS document can be in any one of numerous
formats. At the present time Manakin supports DIM (DSpace Intermediate
Metadata Format), the XML-based Metadata Object Description Schema
(MODS), and qualified or simple Dublin Core. In the future more advanced
or content-specific formats might also be supported.
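Cocoon's pipeline idea, each component feeding its output into the next, is just function composition. The stages below are invented stand-ins for Manakin's real generators, aspects, and themes, but the composition mechanism is the point:

```python
# The Cocoon idea in miniature: a page is produced by chaining components,
# each transforming the output of the previous one. These stages are
# invented stand-ins for Manakin's real generators and transformers.
def pipeline(*stages):
    """Compose stages left-to-right into a single callable."""
    def run(document):
        for stage in stages:
            document = stage(document)
        return document
    return run

generate = lambda doc: "<dri>%s</dri>" % doc            # DRI generation
apply_aspect = lambda doc: doc.replace("item", "ITEM")  # an aspect's pass
theme = lambda doc: "<html>%s</html>" % doc             # theme styling

render = pipeline(generate, apply_aspect, theme)
```

Swapping a theme or inserting an aspect means rearranging the pipeline, not rewriting the components, which is the reuse Manakin builds on.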

Nortel Launches SOA Initiative

Nortel Networks is launching its first foray into SOA-based
communications as it takes the wraps off its Raptor project November
14, 2007 and announces an alliance with partner IBM. Nortel's strategy
leverages service-oriented architecture and Web services technology to
enable both enterprises and carriers to quickly integrate communications
and business processes. Nortel's overall Communications Enablement
strategy is based on four pillars: the implementation of Web services
on specific products; its new software development foundation that
speeds the integration of communications functions into applications
and business processes; alliances with multiple partners, starting with
IBM; and the formation of a Nortel global services practice to support
the SOA-based applications and services. Raptor is a tool kit that
allows developers to more easily integrate functions such as click-to-call,
presence, location and context into applications and business processes.
Developers don't have to know the details of the underlying communications
technology that delivers the connectivity. The Raptor foundation will
not only leverage IBM's WebSphere middleware, but also be integrated
with IBM's Lotus Sametime communications and collaboration platform.
Integration with Sametime will allow functions such as click-to-call,
click-to-conference, presence and shared directory services to be added
quickly. For example, a Sametime user could, from within the Sametime
client, see if a contact's phone is in use.

Data Binding With Castor, Part 1

The Castor project provides data binding capabilities to the open source
realm. It works much like Sun's JAXB, and adds enhanced mapping and
binding to relational database tables. This article shows how to take
the first steps to get Castor to run on your own machine with downloading,
installation, setup, configuration, class path issues, and more. Castor
is an almost-drop-in replacement for JAXB. In other words, you can change
all of your JAXB code to Castor with very little trouble. It's not an
exact replacement, but it's close enough to make the task simple for
even newbie programmers. Castor offers quite a bit more in the data
binding area: it lets you convert between Java and XML without a schema,
provides an easier-to-use binding schema than JAXB, and can marshal and
unmarshal from a relational database as well as from XML documents. Castor
also provides JDO capabilities. JDO stands for Java Data Objects, and
it's the underlying technology that drives the Java-to-RDBMS marshalling
and unmarshalling. JDO isn't quite as popular as it was a few years ago,
but it's still a nice feature to have. Additionally, because JDO is
another Sun specification, you won't write code to an obscure API.
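The round trip at the heart of data binding, marshal an object to XML and unmarshal it back, can be mimicked outside Java. Castor and JAXB bind Java classes; this Python dataclass version (with an invented `Book` type) only illustrates the concept:

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

# Castor/JAXB bind Java classes to XML; this dataclass sketch mimics the
# marshal/unmarshal round trip with an invented Book type.
@dataclass
class Book:
    title: str
    isbn: str

def marshal(book):
    """Object -> XML string."""
    el = ET.Element("book")
    ET.SubElement(el, "title").text = book.title
    ET.SubElement(el, "isbn").text = book.isbn
    return ET.tostring(el, encoding="unicode")

def unmarshal(xml_text):
    """XML string -> object."""
    el = ET.fromstring(xml_text)
    return Book(title=el.findtext("title"), isbn=el.findtext("isbn"))
```

Castor's extra value is doing the same binding against relational tables via its JDO support, not just XML documents.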

SEC Readies XBRL Tagging Rules for Financial Filings

U.S. businesses could be required to file financial reports formatted
in XBRL, top Securities and Exchange Commission officials said November
13, 2007. John White, head of the SEC's division of corporate finance,
and Conrad Hewitt, the SEC's chief accountant, told the Financial
Executives International Conference in New York that the SEC is in the
process of shaping an XBRL (Extensible Business Reporting Language)
proposal to make it mandatory in required filings. The SEC currently
has a voluntary program for tagging financial documents with XBRL.
Microsoft, General Electric and United Technologies are already
participating in the program. A member of the XML family, XBRL is a
machine-readable language for business and financial data. Instead of
treating financial information as a block of text -- as in a standard
Internet page or a printed document -- it provides an identifying tag
for each individual item of data. The open-source, royalty-free language
is being developed by an international non-profit consortium of
approximately 450 major companies, organizations and government agencies.
Of the countries attending the conference, China is moving fastest on
XBRL, already requiring interactive data filing for the full financial
statements of all listed companies in quarterly, half-year and annual
reports. Japan has mandated XBRL for all public companies by the end
of the second quarter of next year. Korea has instituted a voluntary
XBRL program that began last month. Almost 30 companies are already
filing their full financial information using XBRL.

iPhone Gets Add-On Boost from Transmedia's Glide Mobile

Just in time for the iPhone release in the U.K. and Germany on Friday,
online media management and collaboration provider Transmedia launched
an application for creating Word Documents, Web sites, and PDFs on the
popular device. Transmedia's Glide Mobile is a Web-based AJAX and HTML
application that can be accessed through the iPhone's Safari browser.
Once a person signs up for a Glide Mobile account, they can create,
access, and edit Microsoft Word or Open Office documents on their iPhone
-- an option that doesn't come pre-installed on the device. Subscribers
can use many of the same features they're used to on the desktop, such
as bolding, italicizing, or underlining text, as well as creating bullet
points. Documents created on the iPhone can also be converted to PDF
files. The application automatically syncs up and converts desktop
Microsoft Word documents for access on the iPhone. But an Internet
connection is required so that Glide Mobile can send a signal to
Transmedia's servers to trigger the automatic synching. Glide Mobile
can also be used to create media rich documents on the iPhone, since
it offers the option of inserting photos, music, video, bookmarks,
calendar events, and more. Windows Media Player videos exported from
Windows-based PCs are converted to QuickTime through Glide Mobile,
making them viewable on the iPhone. More Information

Search Web Services Version 1.0

OASIS announced a 30-day review for a TC Discussion document titled
"Search Web Services Version 1.0," produced by members of the OASIS
Search Web Services Technical Committee. This document was prepared
as a strawman proposal for public review, intended to generate
discussion and interest. It has no official status. Summary: "The
Search web service is a means of opening a database to external enquiry
in a standardized manner that facilitates discovery of query and
response possibilities and makes it possible for heterogeneous
databases to be queried simultaneously with the same or similar queries.
Client software can be easily configured using a standardized XML
explain document that is accessible from the base URL or via the
explain operation. In contrast with protocols such as SQL and XQuery,
detailed knowledge of a database's structure is not necessary as the
explain document contains parsable information on server defaults,
searchable indexes and record schemas that are returned in the response."
The new specification itself is based on the SRU (Search Retrieve via
URL) specification which can be found at the U.S. Library of Congress
web site. SRU is a standard XML-focused search protocol for Internet
search queries, utilizing CQL (Contextual Query Language), a standard
syntax for representing queries. It is expected that the OASIS standard,
when published, will deviate from SRU. How much it will deviate cannot
be predicted at this time. The fact that the SRU spec is used as a
starting point for development should not be cause for concern that
this might be an effort to rubber-stamp or fast-track SRU. The committee
hopes to preserve the useful features of SRU, eliminate those that are
not considered useful, and add features that are not in SRU but are
considered useful. The committee has decided to request OASIS to release
this as a discussion document. Detailed review of this document is
premature at this point and is not requested; feedback on the
functionality and approach is solicited. Please send comments by
December 7, 2007.
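SRU's URL-based approach can be illustrated with a short sketch: the client expresses a CQL query as query-string parameters on the server's base URL. The base URL, index name, and version value below are hypothetical, chosen only to show the shape of a searchRetrieve request, and are not taken from the draft.

```python
from urllib.parse import urlencode

def sru_search_url(base_url, cql_query, start=1, maximum=10):
    """Assemble an SRU-style searchRetrieve URL carrying a CQL query."""
    params = {
        "version": "1.2",               # assumed SRU version, for illustration
        "operation": "searchRetrieve",
        "query": cql_query,             # CQL: index relation "term"
        "startRecord": start,
        "maximumRecords": maximum,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical server and index name:
url = sru_search_url("http://example.org/sru", 'dc.title = "xml"')
print(url)
```

The same base URL would also answer the explain operation, returning the XML explain document that tells the client which indexes and record schemas the server supports.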

Associating Resources with Namespaces

The editors of the document "Associating Resources with Namespaces"
have released an updated editors' draft. This TAG finding addresses
the question of how ancillary information (schemas, stylesheets,
documentation, etc.) can be associated with a namespace. Section 4
"Namespace URIs and Namespace Documents" (earlier section title was:
"Identifying Individual Terms") has been expanded to include: 4.1
"Namespace URIs and Namespace Documents: The XML language case"; 4.2
"Namespace URIs and Namespace Documents: The Semantic Web case"; 4.3
"GRDDL and Namespace documents." From the Preface: The names in a
namespace form a collection: (1) Sometimes it is a collection of
element names -- DocBook and XHTML, for example; (2) sometimes it
is a collection of attribute names -- XLink, for example; (3)
sometimes it is a collection of functions -- XQuery 1.0 and XPath
2.0 Data Model; (4) sometimes it is a collection of properties --
FOAF; (5) sometimes it is a collection of concepts (WordNet), and
many other uses are likely to arise. Given the wide variety of things
that can be identified, it follows that an equally wide variety of
ancillary resources may be relevant to a namespace. A namespace may
have documentation (specifications, reference material, tutorials,
etc., perhaps in several formats and several languages), schemas
(in any of several forms), stylesheets, software libraries, applications,
or any other kind of related resource. The names in a namespace
likewise may have a range of information associated with them...
[In this document] we define a conceptual model for identifying related
resources that is simple enough to garner community consensus as a
reasonable abstraction for the problem; we show how RDDL 1.0 is one
possible concrete syntax for this model; and we show how other
concrete syntaxes could be defined and identified in a way that
would preserve the model.

Ajax-based Persistent Object Mapping

Virtually all applications use some form of persistence; that is, they
save information for future execution. Generally, the ability to persist
information for later retrieval is a critical aspect of applications,
and as Web applications increasingly integrate user interaction and
contribution, persistence becomes more important. However, persistence
often requires saving state information in a way that's conceptually
different from how the data exists in the execution of the program.
Within the execution of a program, state information is typically stored
in objects (at least, in object-oriented programs) but persisted either
into databases or into text- or character-based formats. The
transformation of state information back and forth between these two
paradigms can often require significant development work and is highly
susceptible to errors. Persistent object-mapping strategies can provide
automation for state storage and retrieval by mapping objects to
persistent data. Such mapping can also provide a simple mechanism for
accessing persistent state and saving that state. The Persevere persistent
object framework brings persistent object mapping to the browser
JavaScript environment. Object persistence has seen great popularity in
the Java programming and Ruby worlds, and the dynamic JavaScript language
is naturally well suited to mapping objects to persisted data. Persevere
automates mapping and communication in Asynchronous JavaScript + XML
(Ajax)-based Web applications in addition to simplifying much of the
development challenge by providing a manageable data model, transparent
client-server Ajax interchanges, automatic state change storage, and
implicit transaction management. By using orthogonal persistent object
mapping, you can rapidly develop powerful Ajax applications by using
simple, familiar JavaScript code. The complexity of writing Ajax requests,
serialization, and database interaction can easily be handled by
Persevere to provide object-oriented access to persisted data for rapid
application development.
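Persevere itself targets the browser JavaScript environment, but the core idea of orthogonal persistence, where attribute changes are transparently written through to storage rather than saved by explicit database code, can be caricatured in a few lines of Python. The class and file names here are invented for illustration; Persevere's real machinery (Ajax interchange, transactions) is far richer.

```python
import json, os, tempfile

class Persisted:
    """Toy orthogonal persistence: every attribute assignment is written
    straight through to a JSON file, keeping memory and storage in sync."""
    def __init__(self, path):
        object.__setattr__(self, "_path", path)
        object.__setattr__(self, "_data", {})
    def __setattr__(self, name, value):
        self._data[name] = value
        with open(self._path, "w") as f:
            json.dump(self._data, f)   # implicit "commit" on every change
    def __getattr__(self, name):
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

path = os.path.join(tempfile.gettempdir(), "demo_state.json")
obj = Persisted(path)
obj.score = 42                         # transparently persisted
with open(path) as f:
    print(json.load(f)["score"])
```

The application code never mentions serialization or storage; that is exactly the development burden persistent object mapping is meant to remove.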

Mobile Web Leaders Push for Open Standards

The technological barriers and business models that have led to the
current morass of proprietary handheld devices, closed-off carrier
networks, and specialized wireless applications must be eliminated if
the mobile Internet is to become as powerful and ubiquitous as it should
someday be, according to industry leaders. Content providers,
applications developers, and mobile carriers, along with standards
backers like Tim Berners-Lee -- the so-called father of the World
Wide Web -- stumped for greater openness in the platforms being used
to develop future wireless online systems at the ongoing Mobile Internet
World conference in Boston on Wednesday. While the lion's share today's
of mobile Web applications do not work across multiple devices, wireless
service plans, and software environments, the potential of the mobile
Internet will only be realized when providers across the industry shift
from proprietary systems to open standards, experts presenting at the
conference said. Representatives from carrier Sprint Nextel, phone maker
Nokia, applications vendor Opera, and even content producer MTV pledged
their commitments at the conference to embrace the call of industry
leaders like Berners-Lee to move away from the proprietary systems they
have previously fostered and to adopt more standards-based platforms.
Berners-Lee said that his invention of the World Wide Web would have
never had the same universal influence and adoption that it has enjoyed
if it had been created only to work on a certain type of device or
operating system.

Tuesday, November 13, 2007

Client-side WSDL Processing with Groovy and Gant

"As part of a cross-platform Web service testing team responsible for
testing functional aspects as well as the performance, load, and
robustness of Web services, I recently realized the need for a small,
easy-to-use, command-line-based solution for WSDL processing. I wanted
the toolset to help testers and developers check and validate WSDL 1.1
files coming from different sources for compatibility with various Web
service frameworks, as well as generating test stubs in Java to make
actual calls. For the Java platform, that meant using Java 6 wsimport,
Axis2, XFire, and CXF. We also needed an environment based on Visual
Studio .Net, and C# that tested WSDL and the services themselves in a
pure-Windows environment. We started client-side test development with
XFire, but then switched to Axis2 because of changing customer
requirements in our agile project. We also used ksoap2 -- a lightweight
Web service framework especially for the Java ME developer. Finally, I
decided to use Groovy and a smart combination of Groovy plus Ant,
called Gant. The components I have developed for the resulting Toolset
can be divided into two groups: (1) The Gant part is responsible for
providing some "targets" for the tester's everyday work, including the
WSDL-checker and a Java parser/modifier component. (2) The WSDL-checker
part is implemented with Groovy, but callable inside an Ant environment
(via Groovy's Ant task) as part of the daily build process. This
article presents a small toolset based on Groovy, Gant, and Java that
could support your daily work in this area, especially if you are a
tester." More Information
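The most basic of the checks described, whether a WSDL 1.1 file parses at all and which operations it declares, can be sketched in Python rather than Groovy. The sample WSDL fragment below is invented, trimmed to just enough structure to exercise the check.

```python
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"

def list_operations(wsdl_text):
    """Parse a WSDL 1.1 document and return declared operation names."""
    root = ET.fromstring(wsdl_text)   # raises ParseError on malformed WSDL
    return [op.get("name")
            for op in root.iter("{%s}operation" % WSDL_NS)
            if op.get("name")]

# Invented sample, just enough to exercise the check:
sample = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/">
  <portType name="QuoteService">
    <operation name="getQuote"/>
    <operation name="listSymbols"/>
  </portType>
</definitions>"""
print(list_operations(sample))
```

A real checker would go further, as the article's toolset does, verifying bindings and generating callable test stubs against each target framework.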

NETCONF Access Control Profile for XACML

"The Network Configuration Protocol defines an XML-based protocol for
managing network device configuration databases." The NETCONF protocol
uses a remote procedure call (RPC) paradigm. A client encodes an RPC
in XML and sends it to a server using a secure, connection-oriented
session. The server responds with a reply encoded in XML. The contents
of both the request and the response are fully described in XML DTDs
or XML schemas, or both, allowing both parties to recognize the syntax
constraints imposed on the exchange. The NETCONF remote network
configuration protocol currently lacks an access control model. The
need for such a model has been recognised within the NETCONF working
group. The Extensible Access Control Markup Language (XACML) is an
XML-based access control standard, with widespread acceptance from
the industry and good open-source support. This document proposes a
profile that defines how to use XACML to provide fine-grained access
control for NETCONF commands. More Information
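As a toy sketch of the RPC paradigm described above (not of the proposed XACML profile itself), a client-side NETCONF rpc envelope might be assembled as follows. Real sessions add secure transport framing and capability exchange, and under the proposed profile the server would evaluate an XACML policy before executing the operation.

```python
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def make_rpc(message_id, operation):
    """Wrap a NETCONF operation element in an <rpc> envelope."""
    rpc = ET.Element("{%s}rpc" % NC_NS, {"message-id": str(message_id)})
    rpc.append(ET.Element("{%s}%s" % (NC_NS, operation)))
    return ET.tostring(rpc, encoding="unicode")

request = make_rpc(101, "get-config")
print(request)
```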

Friday, November 9, 2007

Use an XForms Document as a Custom XML Editor

A previous article in this series showed how to use XSLT 2.0 to
transform an XML tournament document into an HTML bracket that displayed
the tournament results. What we didn't address in that article is how
to fill in the winners and losers for that XML tournament. In this
article, we'll revisit our XML tournament and create an XForms document
that lets us fill in the tournament results without an angle bracket in
sight. The result is an attractive editor for our bracket document type,
complete with Ajax-like effects. Best of all, our use of XForms means
the custom editor is built with declarative markup and is based on the
data structures in the XML document itself. The article addresses: (1)
Defining the layout of the XHTML page; (2) Importing the data model
(our XML bracket) into the XForms document; (3) Defining the panels that
display the matchups; (4) Defining the panel that displays the bracket;
(5) Defining the navigation buttons; (6) Defining the XForms actions
to save and reset the tournament data. A user who selects the winners
of the 15 matchups automatically creates a complete, valid XML document.
To simplify development and maintenance, we refactored the markup in
our XForms document by generating it with an XSLT stylesheet. In our
example here, we simply wrote that document to a file; we could have
just as easily submitted the XML document to a Web application. Best
of all, everything in the XForms document is tied directly to the XML
data model. More Information, See also XML and Forms: Click Here

WebLogic Server 10.3 Tech Preview Highlights

BEA has just released a Technical Preview of WebLogic Server 10.3. This
release focuses on three enhancement areas that we believe will improve
the developer experience for you if you are an existing WebLogic Server
developer, or that will attract you to WebLogic Server if you are not
currently using the product. The first enhancement area is making WebLogic
Server more "lightweight". The term "lightweight" means different things
to different people, including characteristics such as "faster download",
"smaller disk footprint", "less memory consumption", "faster deployment",
or "faster server startup". The primary underlying requirement is to
enable developers to be more productive by reducing the resources and
time consumed by the server and server-related actions. WebLogic Server
10.3 includes new and updated support for Web Services standards,
especially OASIS WS-* standards such as WS-Security, WS-Policy,
WS-ReliableMessaging, and WS-Addressing. WebLogic Server provides an environment for
developing and hosting SOA Services, and is the foundation for BEA's SOA
offering. WebLogic Server 10.3 delivers new features for developing
services and applications for Service-Oriented Architectures. First we're
enhancing Web Services standards support for both JAX-RPC (J2EE 1.4) and
JAX-WS (Java EE 5) Web Services. Coming soon will be Service Component
Architecture (SCA) support, which will enable standards-based development
of composite applications. This will be made available in coming months
in preview form as an add-on to the WebLogic Server 10.3 technology
preview. Another enhancement area is enterprise technology integration
and standards updates. WebLogic Server applications must coexist and
interoperate with other technologies via de facto or de jure standards
to support development and execution of secure, high-performance and
high-availability enterprise applications. We've updated our support to
meet key customer and developer requirements in this area... The
Security Assertion Markup Language (SAML) is the standard for exchange
of security information in order to enable single sign-on across security
domains. This WebLogic Server 10.3 Technology Preview supports the SAML
2.0 standard (and brings forward existing SAML 1.1 support) to enable
single sign-on for Web apps as well as Web services. More Information

Report on Election Markup Language (EML) Interoperability Demonstration

Organizers of an EML Interoperability Demonstration have published a
report of the exercise: "All attendees of the OASIS Open Standards
Forum 2007 held in Ditton Manor UK were invited to participate in an
Interoperability Demonstration of the Election Markup Language (EML)
OASIS Standard. With their help the objective of the Demo was to show
how EML can be used in a multi-channel e-voting ballot involving several
suppliers... In conducting the Demo, EML's schemas 330, 410, 510 and
520 were used and examples of these are shown at Appendix C. All
personal data has been removed from these examples for obvious reasons.
The 330 schema was created from the Forum delegate list and sent to all
channel providers. They prepared their vote casting systems from this
schema and added appropriate validation routines to counter duplicate
and erroneous voting. At the conclusion of voting each channel provider
constructed a 510 schema with the number of votes and sent it to IBM,
who reconciled and counted the votes. The results were then posted to
a remote website using a 520 schema. This whole exercise was a very
global event as data was being captured by back-end systems in Nova
Scotia, Australia, Northern Ireland, as well as locally in Ditton Manor.
The paper ballots were scanned locally. All the data was sent
electronically to Belgium for counting and then posted to the remote
website for use in the final presentation at the Forum." An online ballot
results document is available. More Information

The Presence-ID Header Field

This document defines a header field that enables the author of an email
or netnews message to include a Presence URI in the message header
block for the purpose of associating the author with an address that
provides information about network availability, also known as "presence".
Several technologies enable entities to share information about their
network availability, also known as "presence". Such technologies include
XMPP-IM (Extensible Messaging and Presence Protocol (XMPP): Instant
Messaging and Presence) and the Session Initiation Protocol (SIP). To
facilitate the exchange of presence information, a URI scheme for
presence is defined in Common Profile for Presence (CPP). Because almost
all human users of presence systems also use email systems and because
many such users also use netnews systems, it can be helpful for such
users to specify their presence URIs in the messages they author. The
Presence-ID header field provides a standard location for such
information. This memo documents the syntax and implementation of the
Presence-ID header field, including the information necessary to register
it in the Permanent Message Header Field Registry maintained by the
IANA. The Presence-ID header field is associated with the author of the
message. If the "From:" header field contains more than one mailbox, the
Presence-ID header field should not be added to the message. There should
be no more than one instance of the Presence-ID header field. Upon
receiving a message containing a Presence-ID header field, a user agent
that supports the field should process the field by resolving the
presence URI in accordance with the procedures specified in CPP. A user
agent that has processed a Presence-ID header field may provide appropriate
interface elements if it has independent information linking the author
of the message with the specified presence URI (e.g., via a user-controlled
address book or automated directory lookup). If the user is subscribed
to the presence of the author, such interface elements might include
an indicator that the author is online and available for communication
over a network. More Information
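A sketch of what authoring such a message might look like with Python's standard email library follows. The angle-bracketed pres: URI form of the field value is an assumption made for illustration, not copied from the memo.

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "author@example.com"     # a single author, as the memo requires
msg["To"] = "reader@example.com"
msg["Subject"] = "Status report"
# Assumed form of the field value: a pres: URI in angle brackets.
msg["Presence-ID"] = "<pres:author@example.com>"
msg.set_content("See you online.")
print(msg["Presence-ID"])
```

A receiving user agent would resolve the pres: URI per CPP and, given corroborating address-book data, could show the author's availability.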

HTML 5: Updated Editor's Draft

This updated specification (7-November-2007) defines the fifth major
revision of the core language of the World Wide Web, HTML. In this
version, new features are introduced to help Web application authors,
new elements are introduced based on research into prevailing authoring
practices, and special attention has been given to defining clear
conformance criteria for user agents in an effort to improve
interoperability. The specification represents a new version of HTML4
and XHTML1, along with a new version of the associated DOM2 HTML API.
Migration from HTML4 or XHTML1 to the format and APIs described in
this specification should in most cases be straightforward, as care
has been taken to ensure that backwards-compatibility is retained.
The specification is limited to providing a semantic-level markup
language and associated semantic-level scripting APIs for authoring
accessible pages on the Web ranging from static documents to dynamic
applications. Its scope does not include addressing presentation
concerns, although default rendering rules for Web browsers are included
at the end of this specification. The document has been produced by
members of the Web Hypertext Application Technology Working Group
(WHATWG), which focuses primarily on the development of HTML and APIs
needed for Web applications. The W3C HTML Working Group is the W3C
working group responsible for this specification's progress along the
W3C Recommendation track. This specification is the 7-November-2007
Editor's Draft and has not yet been published as a W3C First Public
Working Draft. HTML 5 is the main focus of the WHATWG community and
also that of the (new) W3C HTML Working Group. HTML 5 is a new version
of HTML 4.01 and XHTML 1.0 addressing many of the issues of those
specifications while at the same time enhancing (X)HTML to more
adequately address Web applications. Besides defining a markup language
that can be written in both HTML (HTML5) and XML (XHTML5) it also
defines many APIs that form the basis of the Web architecture. These
APIs are known to some as "DOM Level 0" and have never been documented.
Yet they are extremely important for browser vendors to support existing
Web content and for authors to be able to build Web applications. More Information

W3C Plenary Day Program Convenes Experts on the Future of the Web

W3C announced that it welcomed over 400 experts from around the world
on 2007-11-07 to participate in a compelling "Plenary Day Program",
designed to address issues shaping the future of the Web. The Wednesday
of the Technical Plenary Week offered a unique opportunity for a broad
W3C Community (Working, Interest and Coordination Groups; Advisory
Committee Representatives; Advisory Board; Technical Architecture Group;
and W3C Team) who have registered to gather in one room and discuss
technical topics of broad interest to the attendees, and of significant
importance to past, present and future of the World Wide Web Consortium.
Authors of the next version of HTML mixed it up with Semantic Web
developers, security experts, Web accessibility advocates, and the media
on the banks of the Charles River in Cambridge, Massachusetts (USA).
The program included a panel on the growing relationships between W3C
and the at-large developer community, the challenges HTML5 and XHTML2
propose to solve, and W3C's emerging vision of what's needed for video
on the Web. The day culminated with a talk by W3C Director Tim Berners-Lee:
"Cracks and Mortar", a review of the Web to date and a close look at
the gaps for signs of both wear and opportunity. The session "HTML 5,
XHTML 2.0, Future Formats" is now referenced in a blog. Details and
links are provided in the announcement, "W3C Community Convenes to
Discuss Web Future. Hundreds of International Participants Exchange
Ideas, Coordinate Work." More Information

Wednesday, November 7, 2007

Process XML Configuration Files with PHP

As a general rule, when you develop any reasonably-complex piece of
software, it's a good idea to take time to identify the product's key
configuration variables, and then separate these from the standard
variable namespace and place them in a separate area. With this process,
you can create a centralized repository of application configuration
information and simplify the task of modifying the product to work in
different environments. It can also help increase a developer's
familiarity with, and understanding of, the key pieces of information
needed to get the product up and running. Traditionally, configuration
variables are stored in one (or more) configuration files. XML provides
a convenient, easy-to-use expression language for an application's
configuration files. Extracting this information into a PHP script can
sometimes pose a challenge. That's where the XJConf for PHP package
comes in: It provides an API to read XML-encoded information and
directly use it to configure PHP data structures like scalars, arrays
and PHP objects. This article introduces the package and demonstrates
some useful real-world applications of its usage, including configuring
complex class trees and building a Web-based configuration interface.
The XJConf package provides an easy-to-use, flexible API that reads
XML-formatted configuration files and converts the values found therein
into PHP data structures. In addition to simple string and numeric
values, it also supports the use of arrays and objects, and includes
built-in intelligence to automatically configure newly-instantiated
objects through setter methods. More Information See also the XJConf for PHP Web site: Click Here
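The same idea, an XML configuration file in, native data structures out, can be caricatured in a few lines of Python. XJConf's actual API, tag vocabulary, and type handling are richer than this flat-dictionary sketch; the element names in the sample are invented.

```python
import xml.etree.ElementTree as ET

def load_config(xml_text):
    """Read a flat XML configuration document into a plain dict."""
    root = ET.fromstring(xml_text)
    return {child.tag: child.text for child in root}

sample = """<config>
  <db_host>localhost</db_host>
  <db_port>5432</db_port>
  <debug>true</debug>
</config>"""
conf = load_config(sample)
print(conf["db_host"])
```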

Web Security Context: Experience, Indicators, and Trust

W3C announced the First Public Working Draft for "Web Security Context:
Experience, Indicators, and Trust." The specification deals with the
trust decisions that users must make online, and with ways to support
them in making safe and informed decisions where possible. In order to
achieve that goal, the specification includes recommendations on the
presentation of identity information by Web user agents; on handling
errors in security protocols in a way that minimizes the trust decisions
left to users, and (we hope) induces them toward safe behavior where
they have to make these decisions; and on data entry interactions that
will make it easier for users to enter sensitive data into legitimate
sites than to enter them into illegitimate sites. Where this document
specifies user interactions with a goal toward making security usable,
no claim is made at this time that this goal is met... To complement
the interaction and decision related parts of this specification,
[Section] 8 'Robustness' addresses the question of how the communication
of context information needed to make decisions can be made more
robust against attacks. Finally, [Section] 9 'Authoring and deployment
best practices' is about practices for those who deploy Web sites. It
complements some of the interaction related techniques recommended in
this specification. The aim of that section is to provide guidelines
for creating Web sites with reduced attack surfaces against certain
threats, and with usefully provided security context information. More Information See also the Last Call Use Cases Working Draft: Click Here

Protocol for Web Description Resources (POWDER): Grouping of Resources

W3C's Protocol for Web Description Resources (POWDER) Working Group has
published the First Public Working Draft of "Protocol for
Web Description Resources (POWDER): Grouping of Resources." POWDER
facilitates the publication of descriptions of multiple resources such
as all those available from a Web site. These descriptions are
attributable to a named individual, organization or entity that may or
may not be the creator of the described resources. This contrasts with
more usual metadata that typically applies to a single resource, such
as a specific document's title, which is usually provided by its author.
Description Resources (DRs) are described separately. This document sets
out how groups (i.e. sets) of resources may be defined, either for use
in DRs or in other contexts. Set theory has been used throughout as it
provides a well-defined framework that leads to unambiguous definitions.
However, it is used solely to provide a formal version of what is
written in the natural language text. Companion documents describe the
RDF/OWL vocabulary and XML data types that are derived from this and
the Description Resources document, setting out each term's domain,
range and constraints. As each term is introduced in this document, it
is linked to its description in the vocabulary document. More Information
See also W3C Semantic Web Activity: Click Here
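The set-based grouping idea can be illustrated with an ordinary membership test: a resource belongs to a group when a property of its URI, such as its host, falls in a declared set. POWDER's real grouping vocabulary (host and path patterns expressed in RDF/OWL) is richer than this hypothetical sketch.

```python
from urllib.parse import urlparse

def in_group(url, include_hosts):
    """Set-style membership: a resource is in the group if its host
    belongs to the declared set of hosts."""
    return urlparse(url).hostname in include_hosts

group = {"example.org", "www.example.org"}  # hypothetical host set
print(in_group("http://example.org/page", group))        # True
print(in_group("http://other.example.com/page", group))  # False
```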

Microsoft Puts the 'F' in Functional

Microsoft is targeting functional programming as a next big thing in
software development. F# -- pronounced "F sharp" -- is a functional
programming language out of Microsoft Research that the company will
productize to target developers dealing with concurrency and those in
the financial, scientific and technical, and academic arenas. At the
Object-Oriented Programming, Systems, Languages and Applications 2007
conference in October, some of Microsoft's leading language gurus,
including Jim Hugunin, Anders Hejlsberg and Erik Meijer, spoke of
coming to Montreal via Cambridge, England, where they had stopped in
on Don Syme, Microsoft's researcher heading up the F# project. Microsoft
had just announced plans to transition the technology from research to
product form under Visual Studio. Earlier in the conference, Hejlsberg,
a core creator of C#, said he has seen a resurgence of functional
programming and its influences. Functional programming treats
computation as the evaluation of mathematical functions and avoids
state and mutable data. Functional languages include APL, Erlang,
Haskell, Lisp, ML, Oz and Scheme. Microsoft's Meijer is one of the
creators of Haskell. S. "Soma" Somasegar, corporate vice president
of Microsoft's Developer Division: "Language features such as lambda
expressions in C# and generics in .Net 2.0 have roots in functional
languages, and LINQ (Language Integrated Query) is directly based on
functional programming techniques. Many ideas from functional
languages are helping us address some of the biggest challenges facing
the industry today, from the impedance mismatch between data and
objects to the challenges of the multi-core and parallel computing
space... Microsoft will fully integrate the F# language into Visual
Studio and continue innovating and evolving F#. In my mind, F# is
another first-class programming language on the CLR (Common Language
Runtime)." More Information
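The functional ideas mentioned above, treating computation as the evaluation of functions and avoiding shared mutable state, look much the same in any language with first-class functions. A small Python sketch (not F# itself), with invented data:

```python
from functools import reduce

prices = [100.0, 250.0, 40.0]

def with_tax(rate):
    # Returns a pure function; no shared, mutable state is touched.
    return lambda p: p * (1 + rate)

taxed = list(map(with_tax(0.08), prices))          # new list, inputs untouched
total = reduce(lambda acc, p: acc + p, taxed, 0.0)
print(round(total, 2))
```

Because each step produces a new value instead of mutating one in place, such pipelines parallelize safely, which is precisely the multi-core motivation Somasegar cites.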

Saturday, November 3, 2007

Election Markup Language (EML) Version 5.0 Submitted for OASIS Approval

OASIS announced that the Election and Voter Services Technical Committee
has submitted the "Election Markup Language (EML) Version 5.0"
specification as an approved (CS 01) Committee Specification for
consideration as an OASIS Standard. A membership vote on the specification
is scheduled for November 16, 2007 through November 30, 2007. Institutional
representatives from Accenture, Boynings Consulting Ltd, EDS, Election
Systems & Software, IBM, Opt2Vote Ltd, Oracle, Secstan, and University
of California (Berkeley) are current members of the TC. The OASIS TC
was chartered to develop a standard for structured interchange among
hardware, software, and service providers who engage in any aspect of
providing election or voter services to public or private organizations,
a standard that is multinational (global acceptance), flexible (effective
across different voting regimes and voting channels), multilingual,
adaptable (supporting elections in the private and public sectors), and
secure. Members of
the TC have been liaising very closely with the IEEE Voting System
Electronic Data Interchange Project 1622, and their draft specification
is seen as a compatible subset and USA localisation of EML. EML is
designed to be flexible for use in elections and referendums that are
primarily paper-based or that are fully e-enabled. The EML v5.0 four-part
specification includes (1) EML Version 5.0 Process and Data Requirements,
edited by John Borras; it describes the background and purpose of the
Election Markup Language, the electoral processes from which it derives
its structure and the security and audit mechanisms it is designed to
support; (2) EML Data Dictionary, which defines the data used in the
processes and required to be handled by the XML schemas, providing in
tabular format information for each Data Element Name the EML schema
type, list of schemas in which the data element occurs, and W3C XML
Schema (xs:) type; (3) EML Version 5.0 Schema Descriptions, which
provides an explanation of the core schemas used throughout, definitions
of the simple and complex datatypes, plus the EML schemas themselves
and also covers the conventions used in the specification and the use
of namespaces, as well as the guidance on the constraints, extendibility,
and splitting of messages; (4) EML Version 5.0 XML schemas serialized
in some 42 separate XSD files, also available in ZIP format. More Information See also the OASIS Election and Voter Services Technical Committee: Click Here