This paper was presented at the Ninth IEEE International Symposium on
Object and Component-Oriented Real-Time Distributed Computing. It
presents an overview of the Real Time Markup Language (RTML). RTML
is an XML profile that provides the syntactic representation for
describing the semantics of real-time data exchanged over
distributed, networked real-time systems. We demonstrate a method that
uses a descriptor-based approach to describe real-time semantics in
distributed real-time systems using XML. We chose XML Schema for its
many advantages. In order to create a consistent schema that
carries the semantics offered by the conceptual design model, we
used the OO conceptual model transformation approach proposed by
L. Feng, et al., "Schemata Transformation of Object-Oriented Conceptual
Models to XML." As discussed in this paper, individual real-time
systems very often have no knowledge of each other's applications,
so setting up collaboration for data exchange is a major challenge,
especially when multiple heterogeneous sources are involved. We
introduce a descriptor-based approach for describing resources using
three types of descriptors: a Concept Descriptor, which details the
features of a real-time system; a Category Descriptor, which organizes
the logical structure of Concept Descriptors that share common
similarities; and a Relationship Descriptor, which describes the
structure of a collection of Concept Descriptors and Relationship
Descriptors and illustrates the semantics, syntax, and organization of
such a structure. We believe that this work is one of the first steps
towards defining an approach that uses XML to describe real-time
resources. By providing such an XML profile, it establishes a knowledge
base for organizations to exchange data using XML messaging... RTML is
an XML descriptor-based profile. Elements in RTML can be classified
into three corresponding descriptor types: Descriptor (D),
DescriptorMapper (DM), or DescriptorCategory. The use of descriptors
allows multi-level abstraction, and Ds and DMs are further divided into
corresponding spectrums. Apart from the OMG SPT profile, RTML also
adopts the specification developed by ISO and ITU-T as Recommendation
X.641 (ISO/IEC IS 13236). This specification provides a standard
approach to describing QoS for different purposes and to describing
QoS at different levels of viewpoint and precision. Along with the
ISO/IEC specification, we have also adopted the QoS Profile. The
meta-model in the QoS Profile provides a lightweight modeling component
for conceptual design customization. The above standards provide
fundamental constructs for transforming the conceptual model into RTML
constraints. This design-time construct can be complemented by realizing
additional details to enrich the description of the design elements
defined in those specifications. RTML provides a reference model for
the realization of abstract details.
Sunday, September 30, 2007
XML Squisher Targets .NET CF-enabled Devices
AgileDelta has announced that its tool for enhancing the performance
of Extensible Markup Language (XML) now supports Microsoft's .NET
Framework and .NET Compact Framework (CF). Efficient XML converts XML
into a binary format claimed to accelerate web applications, reduce
bandwidth utilization, and extend battery life. According to AgileDelta,
the compressed, binary format of XML data generated by Efficient XML
can be hundreds of times smaller than the text form, boosting
transmission speed and slashing network traffic: "Now cell phones,
PDAs, media players, GPS receivers, and many other devices that are
often constrained by battery power, processing power, or memory can
participate in the XML ecosystem." There have been other methods of
converting XML into binary, a proliferation that potentially endangers
the language's cross-platform compatibility. AgileDelta, however, says
its implementation has been selected as the basis for the emerging
global standard for binary XML by the Efficient XML Interchange
Working Group of the World Wide Web Consortium (W3C).
The Efficient XML software development kit (SDK) includes support for
popular XML APIs, including SAX (the Simple API for XML), DOM (Document
Object Model), JAXP (Java API for XML Processing), and a pull-model
streaming API patterned on StAX (Streaming API for XML). Note: Earlier
in 2007 AgileDelta announced the release of its Efficient XML HTTP
Proxy Server and Client. "The proxies provide a simple, plug-in
solution for adding Efficient XML to existing browser, web services
and web-based applications without modifying application code. Now,
businesses of any size can use Efficient XML to significantly reduce
network bandwidth costs, increase data transfer speeds and give remote
workers high-speed access to the information they need. The Efficient
XML HTTP Proxy Server and Client support content negotiation, which
automatically detects and uses Efficient XML for clients and servers
that support it and falls back to text XML for those that do not.
This simplifies incremental roll-out of Efficient XML across an
enterprise and enables deployment in heterogeneous networks where not
all parties have upgraded to Efficient XML."
Serve SQL Data in XML Format
Quite often, the data you have to serve to the client will reside in
SQL databases on the server, so you need an adapter between the tabular
binary SQL data and text-oriented hierarchical XML data. This article
describes a few ways to extract data from SQL databases and serve it
to an AJAX application running in a web browser, depending on the SQL
database you use and the implementation flexibility you need. A large
number of AJAX applications expect that the data exchange between the
web server and the browser will be formatted as XML. If the server-side
data is stored in an SQL database, you can use a server-side script to
transform the tabular SQL format returned by an SQL query into an XML
document, or use the XML functionality embedded in your database server
to reduce server resource utilization. The server-side script might be
your only option if your database server lacks the required XML
functionality (for example, MySQL cannot return query results in XML
format), or if you have to perform extensive additional data processing
on the SQL results. In any case, you shouldn't generate the XML output
by writing individual tags and attributes to the output data stream,
because you might eventually forget one (or more) of the XML encoding
rules and therefore produce invalid XML documents; for example, you
might forget to encode the ampersand ('&') character in the attribute
values. It's much safer to use the DOM functions available in most
server-side scripting languages, build a DOM tree with the script,
and output the XML representation of the DOM tree as provided by the
DOM library. If your database server supports XML output of query
results, but you have to perform specialized data processing to get
the XML structure you need to return to the AJAX client, consider
server-side XSLT transformations. This technique might be faster than
using server-side scripts, even with the added overhead of additional
XML parsing. More Information
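To make the DOM-based approach concrete, here is a minimal Java sketch
that runs a query over JDBC, builds a DOM tree from the rows, and lets
the standard serializer emit well-formed XML. The connection URL,
credentials, table, and column names are hypothetical; any server-side
language with a DOM library would do the same job.

    import java.sql.*;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    public class CustomersToXml {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection string and table; adjust for your database.
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/shop", "user", "secret");
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT id, name FROM customers");

            // Build a DOM tree instead of concatenating tags by hand,
            // so characters such as '&' and '<' are escaped for us.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element root = doc.createElement("customers");
            doc.appendChild(root);
            while (rs.next()) {
                Element row = doc.createElement("customer");
                row.setAttribute("id", rs.getString("id"));
                row.setTextContent(rs.getString("name"));
                root.appendChild(row);
            }
            conn.close();

            // Let the DOM library produce the well-formed XML text.
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.INDENT, "yes");
            t.transform(new DOMSource(doc), new StreamResult(System.out));
        }
    }

Because the DOM layer performs the character escaping, a customer name
containing '&' or '<' comes out correctly encoded without any extra effort.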
Friday, September 28, 2007
Information Model and XML Data Model for Traceroute Measurements
IETF announced the availability of a new draft in the online Internet
Drafts directories. "Information Model and XML Data Model for
Traceroute Measurements" is a work item produced by members of the
IP Performance Metrics Working Group of the IETF. The IETF IPPM Working
Group was chartered to develop a set of standard metrics
that can be applied to the quality, performance, and reliability of
Internet data delivery services. These metrics are designed such that
they can be performed by network operators, end users, or independent
testing groups. Traceroute is a network diagnostic tool used to
determine the hop by hop path from a source to a destination and the
Round Trip Time (RTT) from the source to each hop. Traceroute can be
therefore used to discover some information (hop counts, delays, etc.)
about the path between the initiator of the traceroute measurement
and other hosts. Typically, the traceroute tool attempts to discover
the path to a destination by sending UDP probes with specific
time-to-live (TTL) values in the IP packet header and trying to elicit
an ICMP TIME_EXCEEDED response from each gateway along the path to
some host. Traceroutes are used by many measurement efforts,
either as an independent measurement or to get path information to
support other measurement efforts. That is why there is the need to
standardize the way the configuration and the results of traceroute
measurements are stored. The standard metrics defined by the IPPM working
group for delay, connectivity, and loss do not apply to the
metrics returned by the traceroute tool; therefore, in order to compare
results of traceroute measurements, the only possibility is to add to
the stored results a specification of the operating system and version
of the traceroute tool used. In order to store the results of traceroute
measurements and allow them to be compared, this document defines a
standard way to store them using an XML schema. Section 7 contains the
XML schema to be used as a template for storing and/or exchanging
traceroute measurements information. The schema was designed in order
to use an extensible approach based on templates (similar to how IPFIX
protocol is designed) where the traceroute configuration elements
(both the requested parameters, Request, and the actual parameters used,
MeasurementMetadata) are metadata to be referenced by results information
elements (data) by means of the TestName element (used as unique
identifier). The Open Grid Forum (OGF) is currently using this
approach as well, and cross-requirements have been analyzed. The XML
schema is compatible with the OGF schema: it was designed in a way that
both limits unnecessary redundancy and allows a simple one-to-one
transformation between the two.
W3C Candidate Recommendation: SPARQL Query Results XML Format
Members of the W3C RDF Data Access Working Group have announced the
advancement of the "SPARQL Query Results XML Format" specification to
Candidate Recommendation. W3C publishes a Candidate Recommendation to
gather implementation experience. The RDF Data Access Working Group has
already gathered implementation experience for this specification as
part of developing the "SPARQL Query" and "SPARQL Protocol"
specifications. Rather than request to advance directly to Proposed
Recommendation, the group has requested to use this Candidate
Recommendation period to identify additional implementations, such as
automated consumers of SPARQL XML Results, and to provide implementers
of SPARQL Query and SPARQL Protocol with a suitably stable specification.
The design has stabilized and the Working Group intends to advance this
specification to Proposed Recommendation once the exit criteria are met,
viz., when the SPARQL Query Results XML Format has at least two
implementations. The specification will remain a Candidate Recommendation
until at least 9-October-2007, and an implementation report will be
produced. The specification describes an XML format for the variable
binding and boolean results formats provided by the SPARQL query
language for RDF as part of the Semantic Web Activity. RDF is a flexible,
extensible way to represent information about World Wide Web resources.
It is used to represent, among other things, personal information, social
networks, metadata about digital artifacts like music and images, as
well as provide a means of integration over disparate sources of
information. A standardized query language for RDF data with multiple
implementations offers developers and end users a way to write and to
consume the results of queries across this wide range of information.
With the proposed format, SPARQL variable binding and boolean results can
be expressed in XML.
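As a sketch of what an automated consumer of SPARQL XML Results might
look like, the Java snippet below parses a small variable-binding
document with the standard DOM and XPath APIs and prints each binding.
The query results themselves (the names Alice and Bob) are made up for
illustration; the element names and namespace follow the published format.

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class SparqlResultsReader {
        public static void main(String[] args) throws Exception {
            // A hypothetical SPARQL variable-binding result for ?name.
            String xml =
                "<sparql xmlns='http://www.w3.org/2005/sparql-results#'>" +
                "  <head><variable name='name'/></head>" +
                "  <results>" +
                "    <result><binding name='name'><literal>Alice</literal></binding></result>" +
                "    <result><binding name='name'><literal>Bob</literal></binding></result>" +
                "  </results>" +
                "</sparql>";

            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            Document doc = dbf.newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));

            // Select every binding element regardless of namespace prefix.
            NodeList bindings = (NodeList) XPathFactory.newInstance().newXPath()
                    .evaluate("//*[local-name()='binding']", doc, XPathConstants.NODESET);
            for (int i = 0; i < bindings.getLength(); i++) {
                Element b = (Element) bindings.item(i);
                System.out.println(b.getAttribute("name") + " = "
                        + b.getTextContent().trim());
            }
        }
    }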
An Analysis of XML Compression Efficiency
This paper was presented at the 2007 Workshop on Experimental Computer
Science (ExpCS '07) in San Diego, CA, 13-14 June 2007 (12 pages, with
51 references). XML has gained much acceptance since first proposed
in 1998 by the World Wide Web Consortium (W3C). The XML format uses
schemas to standardize data exchange amongst various computing systems.
However, XML is notoriously verbose and consumes significant storage
space in these systems. To address these issues, the W3C formed the
Efficient XML Interchange Working Group (EXI WG) to specify an XML
binary format. Although a binary format foregoes interoperability,
applications such as wireless devices use one due to system limitations.
Binary formats encode XML documents as binary data. The intent is to
decrease the file size and reduce the required processing at remote
nodes. If XML binary formats are to succeed, an open standard must be
established. The primary impetus for binary XML is the limited
capabilities of wireless devices, e.g., cell phones and sensor networks.
Further pressure to use a binary format comes from the growth of large
repositories, e.g., databases that store data using an XML format.
Technically, both compressed and binary formats are 'binary' formats,
versus plaintext, but binary formats may support random access and
queries, whereas compression formats often do not. Statistical methods
are often used for analyzing experimental data; however, computer
science experiments often only provide a comparison of means. We
describe how we used more robust statistical methods, i.e., linear
regression, to analyze the performance of 14 compressors against a
corpus of XML files we assembled with respect to an efficiency metric
proposed herein. Our end application is minimizing transmission time
of an XML file between wireless devices, e.g., nodes in a distributed
sensor network (DSN), for example, an unmanned aerial vehicle (UAV)
swarm. Thus, we focus on compressed file sizes and execution times,
foregoing the assessment of decompression time or whether a particular
compressor supports XML queries... We present an XML test corpus and
a combined efficiency metric integrating compression ratio and execution
speed. We also identify key factors when selecting a compressor. Our
results show XMill or WBXML may be useful in some instances, but a
general-purpose compressor is often the best choice. Additional
information about the study, including links to the XML corpus used
in the paper, is available as supporting data from Chris Augeri.
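For readers who want a feel for the general-purpose baseline the study
compares against, this rough Java sketch gzips a synthetic XML document
and reports size and elapsed time. The sample payload is invented, and
the one-shot timing is far cruder than the regression analysis used in
the paper.

    import java.io.ByteArrayOutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPOutputStream;

    public class XmlGzipBaseline {
        public static void main(String[] args) throws Exception {
            // Hypothetical XML payload standing in for a corpus file.
            StringBuilder sb = new StringBuilder("<readings>");
            for (int i = 0; i < 1000; i++) {
                sb.append("<reading sensor='s").append(i % 10).append("'>")
                  .append(i).append("</reading>");
            }
            sb.append("</readings>");
            byte[] xml = sb.toString().getBytes(StandardCharsets.UTF_8);

            long start = System.nanoTime();
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
                gz.write(xml); // compress the whole document in one pass
            }
            long elapsedMs = (System.nanoTime() - start) / 1000000;

            System.out.printf("original: %d bytes, gzipped: %d bytes (ratio %.2f), %d ms%n",
                    xml.length, buf.size(), (double) xml.length / buf.size(), elapsedMs);
        }
    }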
Thursday, September 27, 2007
Diameter XML Dictionary
The Diameter Base Protocol (Diameter - IETF RFC 3588) is an extensible
protocol used to provide Authentication, Authorization, and Accounting
(AAA) services to different access technologies. It specifies the
message format, transport, error reporting, accounting and security
services to be used by all Diameter applications. To maintain
extensibility, Diameter uses a dictionary to provide it with the
format of commands and AVPs. This document describes the representation
of the Diameter dictionary using XML. The root or top-level element
of a Diameter dictionary is the 'dictionary' element. The dictionary
element contains zero or more 'vendor' elements, the 'base' element
and zero or more 'application' elements. The top-level XML file
containing the 'dictionary' element SHOULD be named 'dictionary.xml'.
Each 'application' element SHOULD be defined in a separate XML file
and referenced from the top-level XML file using an external entity
declaration. AVP rules elements define the placement of key AVPs within
commands. They are used to do some semantic checking at the protocol
layer. For example, a particular AVP might be required to be first in
a particular message. This element can define those rules. The
requestrules and answerrules elements define the placement of key AVPs
within request and answer commands respectively. These elements may
be used to perform syntax checking at the protocol layer.
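As a small sketch of how an implementation might load such a dictionary,
the Java snippet below parses a top-level dictionary.xml with the
standard DOM API and lists the vendor and application elements it
declares. The file path and the 'name' attribute are assumptions made
for illustration; only the element names come from the document
described above.

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class DiameterDictionaryLister {
        public static void main(String[] args) throws Exception {
            // Hypothetical path to the top-level dictionary file.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File("dictionary.xml"));

            Element dictionary = doc.getDocumentElement(); // the 'dictionary' root element

            // List vendors declared in the dictionary ('name' attribute is assumed).
            NodeList vendors = dictionary.getElementsByTagName("vendor");
            for (int i = 0; i < vendors.getLength(); i++) {
                Element v = (Element) vendors.item(i);
                System.out.println("vendor: " + v.getAttribute("name"));
            }

            // List applications, each of which may live in its own XML file
            // pulled in via an external entity declaration.
            NodeList apps = dictionary.getElementsByTagName("application");
            for (int i = 0; i < apps.getLength(); i++) {
                Element a = (Element) apps.item(i);
                System.out.println("application: " + a.getAttribute("name"));
            }
        }
    }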
Converting XML Schemas to Schematron
Recently Topologi have been working on an actual implementation for a
client [for converting XML Schemas to Schematron]: a series of XSLT 2
scripts that we want to release as open source in a few months time.
Why would you want to convert XSD to Schematron? The prime reason is to
get better diagnostics: grammar-based diagnostics basically don't work,
as the last two decades of SGML/XML DTD/XSD experience make plain. People
find them difficult to interpret and they give the response in terms
of the grammar not the information domain. Basically, we have a two-stage
architecture: the first stage (3 XSLTs) takes all the XSD schema files
and does a big series of macro processes on them, to make a single
document that contains all the top-level schemas for each namespace,
with all references resolved by substitution (except for simple types
which we keep). This single big file gets rid of almost all the
complications of XSD, which in turn makes it much simpler to then
generate the Schematron assertions. We have so far made the preprocessor,
implemented simple type checking (including derivation by restriction)
and the basic exception content models (empty, ALL, mixed content),
with content models under way at the moment. I think the pre-processor
stage might be useful for other projects involving XML Schemas. More Information
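A two-stage pipeline like this can be driven from Java with the standard
JAXP transformation API, as in the hedged sketch below. The stylesheet
and schema file names are invented, the three pre-processing XSLTs are
collapsed into one step for brevity, and an XSLT 2.0 processor such as
Saxon is assumed to be on the classpath since the project targets XSLT 2.

    import java.io.File;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class XsdToSchematronPipeline {
        public static void main(String[] args) throws Exception {
            // JAXP picks up an XSLT 2.0-capable processor (e.g. Saxon)
            // if one is on the classpath.
            TransformerFactory tf = TransformerFactory.newInstance();

            // Stage 1 (hypothetical stylesheet name): flatten the XSD files into
            // one document with references resolved by substitution.
            Transformer preprocess = tf.newTransformer(
                    new StreamSource(new File("xsd-preprocess.xsl")));
            preprocess.transform(new StreamSource(new File("schema.xsd")),
                                 new StreamResult(new File("schema-flattened.xml")));

            // Stage 2 (hypothetical stylesheet name): generate Schematron
            // assertions from the flattened schema.
            Transformer generate = tf.newTransformer(
                    new StreamSource(new File("flattened-to-schematron.xsl")));
            generate.transform(new StreamSource(new File("schema-flattened.xml")),
                               new StreamResult(new File("schema.sch")));

            System.out.println("Wrote schema.sch");
        }
    }

The resulting schema.sch could then be compiled and applied to instance
documents with the usual Schematron tooling.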
Friday, September 21, 2007
Manipulate XML Service Definitions with Java Programming
A Service-Oriented Architecture (SOA) typically exports a range of
services. For XML service modelling and subsequent consumption of
those services by users (people, machines, or other services), Java
technology provides powerful mechanisms to handle XML data, which in
turn provides a key foundation for using SOA concepts. SOA is still
unfolding, and many of the big software vendors are still developing
their SOA offerings. As a result, the SOA area is currently a complex
soup of technologies that includes Java Business Integration (JBI),
Intelligent Event Processing, and Business Process Execution Language
(BPEL) servers. It's entirely likely that user organizations that
intend to reap the benefits of SOA will have to invest heavily before
converging on a solution. By making SOA so complex, the industry might
well inadvertently pave the way for vendor lock-in, even though one
of the promises of SOA is standards-based, component-oriented,
vendor-independent computing. Is it possible for user organizations
to gain some useful operational SOA experience before the expensive
migration process? In answer to this question, this article
demonstrates a few important SOA principles with straightforward XML
and some Java code. It doesn't attempt to cover everything in the
SOA universe; instead, the coverage is restricted to a few key areas.
For example, you can conceivably use RSS to distribute XML service
definitions. However, for this article's example, the transport
mechanism uses Java facilities. The merit of such a focused approach
is that Java developers in user organizations can use the ideas to
build their own simple pilot SOA. Such pilot schemes can help the
organization realize the business benefits of SOA. These benefits include
modelling business services as computational services,
user self-service, greater automation, and more responsive services.
You can implement a migration like the one described as a stand-alone
pilot that operates in parallel to existing business processes. More Information
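To ground the idea of handling XML service definitions with Java code,
the sketch below parses a small, hypothetical definition with the
standard DOM API and lists the operations it declares. The element and
attribute names are invented for illustration and are not taken from
the article.

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class ServiceDefinitionReader {
        public static void main(String[] args) throws Exception {
            // Hypothetical XML service definition (names are illustrative only).
            String xml =
                "<service name='OrderService'>" +
                "  <operation name='placeOrder' input='Order' output='Confirmation'/>" +
                "  <operation name='cancelOrder' input='OrderId' output='Status'/>" +
                "</service>";

            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));

            Element service = doc.getDocumentElement();
            System.out.println("Service: " + service.getAttribute("name"));

            // Enumerate the operations the service exports.
            NodeList ops = service.getElementsByTagName("operation");
            for (int i = 0; i < ops.getLength(); i++) {
                Element op = (Element) ops.item(i);
                System.out.println("  operation " + op.getAttribute("name")
                        + " : " + op.getAttribute("input")
                        + " -> " + op.getAttribute("output"));
            }
        }
    }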
Metadata Extraction Tool Version 3.2
The National Library of New Zealand (Te Puna Matauranga o Aotearoa)
has announced the release of version 3.2 of its open-source Metadata
Extraction Tool. The tool was developed to programmatically extract
preservation metadata from a range of file formats like PDF documents,
image files, sound files, Microsoft Office documents, and many others.
The Metadata Extraction Tool builds on the Library's work on digital
preservation, and its logical preservation metadata schema. The
preservation metadata schema details the data elements needed to
support the preservation of digital objects and will form the basis
for the design of a database repository and input systems for
collecting and storing preservation metadata. It incorporates a number
of data elements needed to manage the metadata in addition to metadata
relating to the digital object itself. The Metadata Extraction Tool
is designed to: (1) automatically extract preservation-related
metadata from digital files; and (2) output that metadata in a standard
format (XML) for use in preservation activities. Although designed
for preservation processes and activities, it can be used for
other tasks such as the extraction of metadata for resource discovery.
Extracting preservation metadata is a two-stage process. In the first
phase each incoming file is processed by the adapters until one of
the adapters recognises the file type. That adapter extracts data
from the header fields of the file and generates an Extensible Markup
Language (XML) file. In the second phase an Extensible Stylesheet
Language (XSL) transformation converts the internal XML file into
an XML file in a useful format. The Tool currently outputs the XML
file using the NLNZ preservation metadata data model schema. The
Tool is written in Java and XML and is distributed under the Apache
Public License (version 2). Developers may be interested in extending
some of the key components of the Metadata Extraction Tool such as
extending existing adapters or developing new ones to process other
file types, or creating new XSLT files to generate different XML
output formats. More Information
W3C Last Call Working Draft for XProc: An XML Pipeline Language
Members of the W3C XML Processing Model Working Group have released
a Last Call Working Draft for "XProc: An XML Pipeline Language,"
inviting public comment through 24-October-2007. Used to control and
organize the flow of documents, the XProc language standardizes
interactions, inputs and outputs for transformations for the large
group of specifications such as XSLT, XML Schema, XInclude and
Canonical XML that operate on and produce XML documents. An XML
Pipeline specifies a sequence of operations to be performed on one
or more XML documents. Pipelines generally accept one or more XML
documents as input and produce one or more XML documents as output.
Pipelines are made up of simple steps which perform atomic operations
on XML documents and constructs similar to conditionals, loops and
exception handlers which control which steps are executed. The Working
Group considers this specification complete and finished. The scope of
editorial changes since the last working draft has overwhelmed the
utility of a [color-coded] draft with revision markup. Significant
changes since the last working draft: (1) The namespace URIs have
changed. The Working Group has no plans to change them again in the
life of this specification. (2) The management of in-scope namespaces
and XPath context is described much more carefully. (3) Namespace
fixup on output documents is discussed. (4) Management of iteration
counting has changed. The 'p:iteration-position' function was renamed
to 'p:iteration-count' and 'p:iteration-size' was removed. (5) Added
'p:add-attribute', 'p:add-xml-base', 'p:directory-list',
'p:make-absolute-uris', and 'p:pack'. Renamed 'p:equal' to 'p:compare'.
(6) Added a MIME type and fragment identifier syntax. More Information
Boost Web Service Performance in JAX-WS with Fast Infoset
XML message transmission and processing are at the foundation of the
web service programming model. To effectively improve web service
performance, you need to reduce the overhead associated with parsing,
serializing, and transmitting XML-based data. Fast Infoset is an
open, standards-based solution for doing just that. It specifies
several techniques for minimizing the size of XML encodings and
maximizing the speed of creating and processing those encodings.
Using these techniques, you can tune Fast Infoset encoding according
to your specific domain requirements, whether that means favoring
compression over processing performance or requiring efficient
compression but not at the expense of processing performance. In
general, Fast Infoset documents are smaller and therefore faster to
process than corresponding XML representations. As such, they can
be very useful when the size and processing time of XML documents
are a concern. For example, the W3C's XML Binary Characterization
Working Group has identified two such use cases: (1) Web services
for small devices that have bandwidth constraints, and (2) Web
services within an enterprise that has high throughput requirements.
This article introduces Fast Infoset, demonstrates it in an example
based on the reference implementation of JAX-WS, and presents some
empirical data comparing the effects of Fast Infoset and MTOM/XOP
(another technology for optimizing XML data transmission and
processing) on web service performance. Fast Infoset is on its way
to being widely supported in various platforms and frameworks such
as Microsoft .NET and .NET CF, Sun GlassFish, BEA WebLogic, IBM SDK
for Java 6.0, and TMax Soft JEUS 6, as well as in the Linux, Solaris,
and Win32 operating systems, where Fast Infoset support in JAX-WS
is based on the FI project at java.net. Developers -- particularly
those in the SOA domain -- should explore this promising technology
and learn how they can work more efficiently with XML to deliver
high-performing web services.
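As a sketch of how Fast Infoset is typically switched on for a JAX-WS
client, the helper below sets the content-negotiation property on a
proxy's request context. The property name and values reflect the
JAX-WS reference implementation as commonly documented, so treat them
as assumptions to verify against your own stack.

    import java.util.Map;
    import javax.xml.ws.BindingProvider;

    public class FastInfosetConfig {
        // Enable Fast Infoset content negotiation on a JAX-WS RI proxy.
        // "pessimistic" starts with text XML and switches to Fast Infoset
        // once the server advertises support; the property name is an
        // assumption based on the reference implementation.
        public static void enableFastInfoset(Object port) {
            Map<String, Object> ctx = ((BindingProvider) port).getRequestContext();
            ctx.put("com.sun.xml.ws.client.ContentNegotiation", "pessimistic");
        }
    }

A client would call enableFastInfoset(port) on the proxy obtained from
its generated Service class before invoking any operation; servers that
do not understand Fast Infoset simply continue to receive text XML.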
Wednesday, September 19, 2007
Integrate XForms with the Google Web Toolkit
The Google Web Toolkit (GWT) has become a very popular way to develop
Ajax applications. It allows Java developers to rapidly create Ajax
applications by leveraging their knowledge of Java technology without
requiring any knowledge of JavaScript. XForms represents an evolution
in the HTML standard, and allows for simple constructs to create
complex, dynamic behavior. Both GWT and XForms are powerful enough to
provide complete solutions to many problems. This four-part series
demonstrates how to use the Google Web Toolkit (GWT) and XForms
together to create a dynamic Web application. Part 1 starts with a
bottom-up approach to the problem of using GWT and XForms together.
It takes a look at some of the underpinnings of each technology,
examining the common ground between them that will allow for their
peaceful coexistence. This will lay the foundation for developing a
Web application that uses both GWT and XForms together. XForms is a
standards-based technology that will be central in the next generation
of the HTML specification. XForms uses the familiar
Model-View-Controller paradigm. The key to XForms is separating
data from the physical view of the data. Sound like a familiar
concept? With the data separated, it can be viewed in any way HTML
allows. It can also be bound to form elements to allow for
a seamless way to enter data and to edit existing data. With a model
declared, you can easily create views from the data encapsulated by
the model. XForms contains numerous common controls for working with
model instance data. Each control can reference data from the model's
instance data. The instance data is in an XML format, so we can easily
navigate and reference it arbitrarily using XPath. XForms supports
the full XPath 2.0 specification. More Information
W3C Last Call Working Draft for MTOM Policy Assertion
W3C's XML Protocol Working Group has released a First Public and Last
Call Working Draft for the "MTOM Serialization Policy Assertion 1.1"
specification. The Last Call period ends 15-October-2007. The
specification describes a domain-specific policy assertion that
indicates endpoint support of the optimized MIME multipart/related
serialization of SOAP messages defined in section 3 of the "SOAP
Message Transmission Optimization Mechanism (MTOM)" specification.
This policy assertion can be specified within a policy alternative
as defined in "Web Services Policy 1.5 - Framework (WS-Policy)" and
attached to a WSDL description as defined in "Web Services Policy
1.5 - Attachment (WS-PolicyAttachment)." For backwards compatibility,
the policy assertion can also be used in conjunction with the SOAP
1.1 Binding for MTOM 1.0 Member Submission. The document also defines a
namespace URI for the assertion. MTOM itself describes an abstract
feature and a concrete implementation of it for optimizing the
transmission and/or wire format of SOAP messages (optimizing hop-by-hop
exchanges between SOAP nodes).
Tuesday, September 18, 2007
The Extensible Neuroimaging Archive Toolkit
The Extensible Neuroimaging Archive Toolkit (XNAT) is a software
platform designed to facilitate common management and productivity
tasks for neuroimaging and associated data. In particular, XNAT
enables quality-control procedures and provides secure access to and
storage of data. XNAT follows a three-tiered architecture that includes
a data archive, user interface, and middleware engine. The XNAT
framework relies heavily on XML and XML Schema for its data
representation, security system, and generation of user interface
content. XML provides a powerful tool for building extensible data
models. This extensibility is particularly important in rapidly
advancing fields like neuroimaging, where the managed data types are
likely to change and evolve quickly. XML Schema has become the
standard language for defining open and extensible XML data formats.
As a result, many biomedical organizations have developed or are
currently developing standards in XML. XNAT uses a hybrid storage
architecture that leverages the strengths of XML, relational
databases, and standard file systems. Data stored by XNAT are modeled
in XML using XML Schema. From the XSDs supplied by a site, XNAT
generates a corresponding relational database that actually stores
all of the nonimage data. XNAT automatically imports and exports
compliant XML to and from the generated database. Image data remain
as flat files in their native format (e.g., DICOM) on the file
system. These files are represented as URI links in the database
and XML. This hybrid XML/relational/file system architecture has
a number of advantages. By building on a data model in the XML domain,
the XNAT platform is able to generate a great deal of content from
the known structure of XML documents and XNAT sites can easily
utilize the growing set of XML-based services and technologies. By
storing the text and numeric data in a relational database, the
typical drawbacks of XML data representations -- inefficient storage
and querying -- are avoided. By storing the image data in flat files,
the cumbersome nature of binary types in XML and databases is avoided
and the images can be directly accessed by users and applications. See also the XNAT project web site. More Information
Monday, September 17, 2007
Extended XQuery for SOA
In a services-oriented architecture (SOA), a business process is
implemented as a web service that programs (orchestrates in SOA
terminology) other web services. An orchestrator web service is usually
coded in a language outside the XML domain (e.g., Java), and in this
context XQuery is used only to query and transform data -- not to
orchestrate other web services. However, here we show how a few
extensions to XQuery give it the additional role of web service
orchestrator, allowing this XML-domain-specific language to implement
all the steps in a complex SOA process. This article explains the
choice of extensions, outlines their implementation for a specific
XQuery processor, and shows how extended XQuery was used to create web
services to process complex financial data. While web services created
in this way are usable within any SOA, they can also act as the
highest-level orchestrators in what some authors refer to as SOA lite.
The extensions applied here for XQuery work equally well for XSLT 2.0...
SOA Lite refers to a SOA which has all services wrapped in web service
interfaces and in which some web services are specially created for
orchestration. Therefore, SOA lite need not implement the many so-called
governance services, such as for security, discovery, and testing;
neither would it involve using an engine to automatically generate the
orchestrator web service from a BPEL document. However, SOA lite has
the core features of a full SOA, and extended XQuery is ideally suited
to create the orchestrator web services for it. We have extended XQuery
to give it a new role as web service orchestrator, so that complex web
services, involving validation and orchestration, can be implemented
entirely in this XML-domain-specific language. Extended XQuery, as
applied to the processing of structured finance deals, has greatly
simplified the code engineering and given very good performance. We see
extended XQuery as suitable for SOA lite or as part of any SOA. These
extensions work identically for XSLT 2.0. More Information. See also W3C XML Query (XQuery).
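For contrast with the extended-XQuery approach, the following minimal sketch (endpoints, QNames, and payloads are hypothetical) shows the conventional arrangement the article describes: an orchestration step coded in Java using JAX-WS, invoking one web service and handing its XML result to the next. This is the role the extensions allow XQuery itself to take over.

    // Minimal sketch (hypothetical endpoints, QNames, and payloads): the
    // conventional, Java-coded orchestrator role that the article's XQuery
    // extensions are meant to take over. One service is invoked and its XML
    // result is handed to a second service.
    import java.io.StringReader;
    import java.net.URL;
    import javax.xml.namespace.QName;
    import javax.xml.transform.Source;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.ws.Dispatch;
    import javax.xml.ws.Service;

    public class DealOrchestrator {
        public static void main(String[] args) throws Exception {
            // Step 1: query a (hypothetical) pricing service for a deal.
            Service pricing = Service.create(
                    new URL("http://example.com/pricing?wsdl"),
                    new QName("http://example.com/pricing", "PricingService"));
            Dispatch<Source> priceCall = pricing.createDispatch(
                    new QName("http://example.com/pricing", "PricingPort"),
                    Source.class, Service.Mode.PAYLOAD);
            Source priceResult = priceCall.invoke(
                    new StreamSource(new StringReader("<priceRequest dealId='42'/>")));

            // Step 2: pass the result on to a (hypothetical) booking service.
            // Validation and transformation of the payload are omitted here.
            Service booking = Service.create(
                    new URL("http://example.com/booking?wsdl"),
                    new QName("http://example.com/booking", "BookingService"));
            Dispatch<Source> bookCall = booking.createDispatch(
                    new QName("http://example.com/booking", "BookingPort"),
                    Source.class, Service.Mode.PAYLOAD);
            bookCall.invoke(priceResult);
        }
    }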
Introduction to Voice XML Part 5: Voice XML Meets Web 2.0
In this final Voice XML article installment the author takes a look at
how voice can add a new rich dimension to your Web applications,
especially those centered around XML. With Web 2.0 and mashups on the
rise, adding Voice XML to the mix lets you pull and push Web-based
information to your users wherever they may roam. JavaScript
(ECMAScript) has been getting a lot of attention lately in the Web 2.0
zone as the key ingredient for doing client-side AJAX. The good news
is that much of that JavaScript expertise can be leveraged in your
Voice XML applications. One of the benefits of ECMAScript is that you
can access Voice XML variables within ECMAScript. Elements that accept
the 'expr' attribute can use arbitrary ECMAScript code to generate a
value at runtime. And you can abstract your commonly used ECMAScript
functions into functions or libraries to support reuse in your Voice
XML pages. Some key things to note about JavaScript include the
following: (1) Voice XML variables are equivalent to ECMAScript
variables. Voice XML variables can be passed to JavaScript functions.
Values returned from functions can be stored in Voice XML variables.
(2) The expr attribute available with many tags can refer not only to
Voice XML or ECMAScript variables but also can include ECMAScript
function call expressions. (3) ECMAScript can be placed inline in the
Voice XML document using the 'script' element, or scripts can be
loaded from a URI. (4) ECMAScript functions follow the familiar scope
hierarchy... Dynamic Voice XML takes us to a higher level, enabling us
to create more robust and up-to-date applications by dynamically
creating Voice XML using server data. To do dynamic Voice XML we need
a server technology to trigger the conversion of data in a data
repository to Voice XML. Server technologies capable of handling this
include Java Servlets, Java Server Pages (JSPs), Active Server Pages
(ASPs), PHP scripts and many other server-side scripting technologies.
The basic idea is to use a program to extract data from a repository
and generate a valid Voice XML document. While this might sound complex,
involving setting up a relational database and writing code to extract
data and generate XML, we are working in XML-land and have at our disposal
a variety of XML tools -- one of the most powerful being XSLT, the XML
transformation language... The release of the Voice XML 2.0 standard has been
instrumental in giving developers a cross-platform way not only to
build stand-alone voice applications but also to integrate voice into
a broad range of web service-based applications. Harnessing the speech
recognition and text-to-speech technologies available from voice
providers, Voice XML enables developers to build powerful
voice-controlled apps by defining their own domain-specific grammars,
by writing their own application-specific JavaScript, and by submitting
data from Voice XML to server-side gateways that can establish
connections with any other server or web service available across the
Internet. More Information
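As a minimal sketch of the dynamic approach described above, the following Java Servlet emits a Voice XML document built from server-side data; the class name, data source, and prompt wording are hypothetical placeholders.

    // Minimal sketch (hypothetical data source and wording): a Java Servlet that
    // generates a Voice XML document from server-side data, as described above.
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ForecastVxmlServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // In a real application this value would come from a database or web service.
            String forecast = "Sunny with a high of 72 degrees";

            resp.setContentType("application/voicexml+xml");
            PrintWriter out = resp.getWriter();
            out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
            out.println("<vxml version=\"2.0\" xmlns=\"http://www.w3.org/2001/vxml\">");
            out.println("  <form>");
            out.println("    <block>");
            out.println("      <prompt>Today's forecast: " + forecast + "</prompt>");
            out.println("    </block>");
            out.println("  </form>");
            out.println("</vxml>");
        }
    }

The same markup could equally be produced by an XSLT transform over an XML data source, which is the route the article highlights.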
Friday, September 14, 2007
Using XML Schema 1.0: When Can Language Components Be Removed?
When can content be removed from a content model? The answer depends
on what we mean by "remove". The first aspect is whether the content is
completely removed or whether the minimum and/or maximum number of
occurrences of the content is reduced, though possibly still allowed.
The second aspect is whether the content that is removed from the
definition is still allowed to occur in documents. In general, a newer
language can be forwards and backwards compatible with an older
language if the component is removed and still accepted. A newer
language can be forwards compatible with an older language if the
component is optional and is removed and not accepted. A newer
language can be backwards compatible with an older language if the
component is optional and is removed and not accepted only if the
producers do not produce the optional component. A newer language
cannot be backwards or forwards compatible with an older language
if the component is required. More Information
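The compatibility rules above can also be checked mechanically. A minimal sketch, assuming hypothetical file names for the two schema versions and their instance documents, uses the standard JAXP validation API to test whether each version still accepts the other's documents:

    // Minimal sketch (hypothetical file names): test whether documents written
    // against one version of a schema are still accepted by another version,
    // using the standard JAXP validation API.
    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;

    public class CompatibilityCheck {
        static boolean accepts(File schemaFile, File instanceFile) {
            try {
                SchemaFactory sf =
                        SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
                Schema schema = sf.newSchema(schemaFile);
                Validator validator = schema.newValidator();
                validator.validate(new StreamSource(instanceFile));
                return true;                      // instance is valid against this schema
            } catch (Exception e) {
                return false;                     // not accepted (or could not be read)
            }
        }

        public static void main(String[] args) {
            File oldSchema = new File("order-v1.xsd");
            File newSchema = new File("order-v2.xsd");        // component removed here
            File oldDoc    = new File("order-written-for-v1.xml");
            File newDoc    = new File("order-written-for-v2.xml");
            // Backwards compatibility: v2 consumers still accept v1 documents.
            System.out.println("v2 accepts v1 documents: " + accepts(newSchema, oldDoc));
            // Forwards compatibility: v1 consumers still accept v2 documents.
            System.out.println("v1 accepts v2 documents: " + accepts(oldSchema, newDoc));
        }
    }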
jQuery 1.2 Release is "Massive"
John Resig, the creator and lead developer of the jQuery JavaScript
library has announced the release of jQuery 1.2, calling it a
"massive new release ... that's been a long time in the making."
Version 1.1 was released last January. The jQuery library is a
collection of UI tools for web app development that includes
simplified DOM traversal, animation, event handling, and
so on. jQuery's dot notation "chainability" allows developers to
write very concise code. jQuery 1.2 includes several new features.
Additionally several inefficient or confusing features from 1.1
were dropped from the latest release, as was XPath selector support.
Any projects that require these 1.1 features can be augmented with
the jQuery 1.1 Compatibility Plugin and the XPath Selector Plugin.
The compressed jQuery 1.2 library is 14 KB; the uncompressed version
is 77 KB. More Information. See also Getting Started with jQuery: http://www.ddj.com/java/201000935
Nokia Revamps Mobile Map Service
Nokia has added new features to its mobile mapping application, with
a revamped user interface and a status indicator to alert users when
they're connected to a GPS satellite. From the web site: "Nokia Maps
needs either a built-in or external GPS receiver for real-time
navigation or finding your location. Even without GPS you can use Nokia
Maps to browse maps, locations, points of interest and to plan a route.
If your mobile device has a built-in GPS receiver such as the Nokia N95,
you are good to go as it is with Nokia Maps. You can also use Nokia
Maps with an external GPS receiver. Bluetooth technology creates a
wireless connection between the GPS receiver and your device. With
Nokia Maps, there are free maps available for more than 150 countries,
with navigation supported for over 30 countries. In order to get a
satellite fix, you need to be outside with a clear view to the sky.
With navigable maps, you can use the full range of navigation features,
such as real-time tracking of your route and voice guidance. Routing:
With Nokia Maps you can easily plan a route to your destination. Simply
indicate your starting point, and your destination, and Nokia Maps will
plot your course with easy-to-read directional arrows. More than 3000
city guides are available for Nokia Maps. If you would like Nokia Maps to
give you clear voice and visual guidance turn-by-turn, just upgrade
your application with the extra voice-guided navigation service. Tagging
lets you save the spot on your map as a landmark and then send it to
your friends. Whether you are meeting up or recommending a restaurant,
when you've found the desired place on the map you can send it quickly
and easily, via MMS, SMS, email, Bluetooth or infrared."
Microsoft and Sun Support Each Other in Virtualized Environments
Microsoft will support Solaris as a guest with its virtualization
products, and Sun will do the same with Windows as a guest in Sun's
virtualization offerings, the companies announced on September 12,
2007. Sun also announced that it is now an official Windows Server OEM
with its x64 server line. The two companies will begin jointly selling
Windows Server 2003 running on Sun hardware.
In 2004, Sun and Microsoft announced Windows certification for Sun's
Xeon servers and said that it expected to seek and obtain Windows
certification for Sun's Opteron-based servers, as well. These
announcements were all part of an extension of the Microsoft-Sun
partnership agreement originally announced in 2004. Solaris already
includes built-in virtualization. Microsoft is planning to
add built-in virtualization to Windows Server with its Windows Server
Virtualization ('Viridian') hypervisor. Microsoft will deliver a first
test release (Community Technology Preview) of Viridian to Windows
Server testers -- most likely next week according to sources -- as
a built-in part of Windows Server 2008 Release Candidate (RC) 0. When
Microsoft ships Windows Server 2008 in the first quarter of 2008, the
product will include a beta version of Viridian... [Microsoft is]
constructing an Interoperability Center on the Redmond campus that
will be focused around Windows on Sun x64 systems, as well as on other
"joint Sun/Microsoft solutions in areas such as databases, e-mail and
messaging, virtualization, and Remote Desktop Protocol (RDP) support
in Sun Ray thin clients." Microsoft and Sun pledged to work together
in other areas, including collaborating to advance the worldwide
deployment of the Microsoft Mediaroom IPTV and multimedia platform on
Sun server and storage systems. [Announcement says:] The
Interoperability Center on Microsoft's Redmond campus "will include
a demonstration area for Sun x64 systems, act as a working lab for
Windows on Sun benchmarks and sales tools, and support customers
running proofs of concept for projects focused on Windows on Sun x64
systems, including joint Sun/Microsoft solutions in areas such as
databases, e-mail and messaging, virtualization, and Remote Desktop
Protocol (RDP) support in Sun Ray thin clients. The Interoperability
Center will expand Sun's presence on the Microsoft main campus, adding
to existing Sun systems showcased and customer-tested in the Microsoft
Enterprise Engineering Center."
BEA Upgrades Application Server with SOA, Web 2.0 Capabilities
BEA Systems will fit its WebLogic Server Java application server with
improvements geared to Web 2.0, SOA and interoperability with
Microsoft's .Net platform, the company said at the BEAWorld San
Francisco conference on Wednesday. WebLogic Server 10.3 also features
a new modular approach in which users can selectively download only
components they want. The upgrade will be offered as a technology
preview this fall with general availability set for next year. The
application server's Web 2.0 capabilities are enabled through enhanced
support for AJAX (Asynchronous JavaScript and XML). Specifically, a
publish-and-subscribe engine within WebLogic Server will provide live
updates to AJAX and Flex clients. In the SOA realm, WebLogic Server
10.3 backs SAML (Security Assertion Markup Language) and improvements
to the Java API for XML Web Services. Blake Connell, Director of
Product Marketing for WebLogic Server: "The big point in SOA is
support for SAML 2.0, which provides Web single sign-on. SAML enables
people to sign on securely with a Web client and then have access to
other systems without having to keep re-logging in. Java API for XML
Web Services, meanwhile, is the specification used for writing Web
services capabilities on top of the application server. To accommodate
.Net applications, WebLogic Server 10.3 will feature a JMS (Java
Message Service) C# client. With this component, users who have
deployed .Net systems but want to standardize on JMS as a messaging
backbone can do that. Leveraging BEA's microServices Architecture,
version 10.3 allows for componentizing of WebLogic Server. For example,
users could leave out the Java development kit or Enterprise JavaBeans
and JMS capabilities if they do not need these."
Software AG Releases webMethods Version 7.1
A month after Software AG unveiled its roadmap for converging webMethods
products, it is releasing the first of the new or enhanced offerings.
The new webMethods 7.1 release covers enterprise service bus (ESB),
business process management (BPM), and the first extension of its BAM
capability to other parts of the stack. The ESB uses the webMethods
offering as a starting point and retrofits the BPEL orchestration
capability from the old Software AG Crossvision Service Orchestrator
product. Additionally, the new version of webMethods BPM adds a number
of new functions and enhancements. For instance, the new version beefs
up the process simulation capability. Until now, the simulation only
displayed potential bottlenecks, but didn't provide any key performance
indicators (KPIs) that would reveal insight on the source or impact of
those bottlenecks. The new version adds that granularity including
visualization, scenario management, bottleneck identification,
multi-process simulation, reporting, round tripping, and versioning.
Other enhancements to webMethods BPM include the ability to customize KPIs
to reflect methodologies such as Six Sigma or Lean Production, some
new service level agreement (SLA) management tools, and calendaring
integration with Microsoft Outlook and Lotus Notes. One of the more
interesting parts of the announcement is how Software AG is beginning
to seed some of webMethods' Optimize business activity monitoring
(BAM) dashboard functionality back into other parts of the stack. In
this case, Optimize dashboards are being added to the webMethods B2B
trading partner management piece, which coincidentally is the piece
around which the original webMethods was founded. [CBR View:]
"Maintaining service levels is why the IT operations folks are buying
into ITIL and related analytic tools and dashboards of their own.
That's presumably the chasm that HP Software is attempting to bridge
following its reverse acquisition of Mercury... But at this point,
service levels to IT operations may cover parameters such as incident
resolution response time or server availability. In some cases,
there are attempts on the part of the HPs, BMCs, CAs, and IBM Tivolis
of the world to extend that to business services... If you buy into
what the SOA and ITIL-oriented vendors are promising, you may start
seeing lots of parallel dashboards and parallel islands of service
level management automation emerging, each covering their own domain
or slice of the world."
Intalio BPEL Engine Becomes Apache Top Level Project
Intalio, Inc. announced that its open-source BPEL process engine named
Orchestration Director Engine (ODE) has recently graduated from the
Apache incubator to a Top Level Project. ODE was contributed by
Intalio to the Apache Software Foundation in July 2006, following
Intalio's acquisition of FiveSight Technologies. This graduation
"marks an important milestone in the development of Intalio Server,
which is built on top of the ODE engine. ODE is the only open-source
BPEL engine currently available under a liberal open-source license
and supports all versions of the BPEL specification (1.0, 1.1, and 2.0).
Built upon this foundation, Intalio Server is the fastest and most
scalable process engine currently available on the market, capable of
supporting hundreds of thousands of different process models deployed
on the same server, and hundreds of millions of process instances
running concurrently on a single CPU. Apache ODE (Orchestration
Director Engine) executes business processes written following the
WS-BPEL standard. It talks to web services, sending and receiving
messages, handling data manipulation and error recovery as described
by your process definition. It supports both long- and short-lived
process executions to orchestrate all the services that are part of
your application. WS-BPEL is an XML-based language defining several
constructs to write business processes. It defines a set of basic
control structures like conditions or loops as well as elements to
invoke web services and receive messages from services. It relies on
WSDL to express web services interfaces. Message structures can be
manipulated, assigning parts or the whole of them to variables that
can in turn be used to send other messages. The Apache Software
Foundation provides support for the Apache community of open-source
software projects. The Apache projects are characterized by a
collaborative, consensus based development process, an open and
pragmatic software license, and a desire to create high quality
software that leads the way in its field.
W3C First Public Working Draft: CSS Grid Positioning Module Level 3
W3C announced that members of the CSS Working Group have released the
First Public Working Draft for the "CSS Grid Positioning Module Level
3" specification. This Cascading Style Sheets (CSS) module describes
integration of grid-based layout similar to the grids traditionally
used in books and newspapers, with CSS sizing and positioning. This
design strategy complements the different approach defined in the CSS
Advanced Layout Module. Grids may be explicitly authored or implied
and combined with Media Queries. Grid systems have provided great
value to print designers for many years, and the same concepts may
be applied to online content. Unlike print media, however,
dimensions of online devices vary broadly; a single fixed-sized grid
that worked perfectly for print pages only works in a subset of web
scenarios. Adaptable solutions require dealing with a grid that adapts
to fit devices of varying form factors. This CSS module adds
capabilities for sizing and positioning in terms of a scalable grid.
The grid can be specified directly by the author, or can be implied
from existing two-dimensional structures e.g., tables or multi-column
elements. Grid positioning addresses layout in continuous media and
in paged media. The "CSS Advanced Layout Module" specification defines
template-based positioning as an alternative to absolute positioning,
which, like absolute positioning, is especially useful for aligning
elements that don't have simple relationships in the source
(parent-child, ancestor-descendant, immediate sibling). But in
contrast to absolute positioning, the elements are not positioned
with the help of horizontal and vertical coordinates, but by mapping
them into slots in a table-like template. The relative size and
alignment of elements is thus governed implicitly by the rows and
columns of the template. It doesn't allow elements to overlap, but
it provides layouts that adapt better to different widths.
Tuesday, September 11, 2007
Open XML Voted Down But Not Out
Microsoft's Office Open XML failed to get enough votes early this month
for approval as an international standard. However, if Microsoft
addresses technical concerns raised by members of the International
Organization for Standardization, the specification could still join
the OpenDocument Format next year as a certified ISO specification for
creating and viewing electronic documents. The deadline for the
five-month, fast-track voting process by 104 countries on whether to
adopt OOXML as an international standard was September 2, 2007. ISO
announced last week that the standard did not receive enough votes for
approval. A ballot resolution meeting to address concerns identified
in this round of balloting is expected to be held by ISO and the
International Electrotechnical Commission (IEC) in February 2008. "At
the moment, the ODF and OOXML have two different scopes," said Mike
Hogan, an electrical engineer at NIST who is involved in the standards
process. ODF, which had its origins with Sun Microsystems' Open Office
program, is more generic, and OOXML focuses on opening Microsoft
documents, he said... NIST favors competing document standards, NIST
Director William Jeffrey said. "NIST believes that ODF and OOXML can
coexist as international standards," Jeffrey said. "NIST fully supports
technology-neutral solutions and will support the standard once our
technical concerns are addressed." Hogan said NIST is seeing something
similar to this in U.S. government agencies: "We have CIOs in the
government say we might be buying products that purport to be able to
open documents using either standard. So we're likely to buy [products]
that can handle both standards." The first edition of OOXML is in play,
he said. "There are a lot of changes being requested, let's see how
many they can agree on" at the meeting in February. Aside from the
fast-track ballot receiving a lot of publicity, there's nothing new
in OOXML's journey to standardization, Hogan said. "As a proposed
standard works its way through the many cycles of an ISO committee,
you go through many ballots to get something right..." Further Information
Building a Web Service Powered JSR 168 Financial Portlet
An increasing number of web applications built today use portal
technology. A portal is a web application that typically provides
services such as personalization, single sign-on, and content
aggregation from different sources. Commercially available Java Portal
Web Servers include BEA, IBM, and Oracle, but there are also many open
source Java Portal Web Servers such as Liferay, Pluto, Stringbeans,
and JBoss Portal. Most of these Java Portal Web Servers have tools
that allow you to build portlets. In the early days of portals, you
had to develop and maintain a separate version of your portlet that
complied with the vendor-specific portlet API for each and every
vendor portal. Maintaining separate vendor-specific versions was time
consuming, aggravating, and cumbersome, and limited the availability
of generic, cross-server portlets. Java Specification Request #168
(JSR 168) has solved this vendor-specific portlet configuration
problem. By adhering to the standard, you can now build portlets
that can run in portals irrespective of vendor. Most Java
Portal Web Servers support the JSR 168 specification. This article
concentrates on the presentation and service layers of a Portal Web
Server. The presentation layer interacts with services to aggregate
data from different sources. These services are typically defined in
the service layer and are part of any Service-Oriented Architecture
(SOA) implemented in a portal. More Information
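As a minimal sketch of what the standard buys you (the class name and markup are hypothetical), the following JSR 168 portlet overrides doView and runs unchanged in any compliant portal container:

    // Minimal sketch (hypothetical class name and markup): a JSR 168 portlet
    // that overrides doView; the same class runs unchanged in any portal
    // server that implements the standard Portlet API.
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.portlet.GenericPortlet;
    import javax.portlet.PortletException;
    import javax.portlet.RenderRequest;
    import javax.portlet.RenderResponse;

    public class StockQuotePortlet extends GenericPortlet {
        @Override
        protected void doView(RenderRequest request, RenderResponse response)
                throws PortletException, IOException {
            // In a real portlet this fragment would be aggregated from a service
            // in the service layer; here it is a hard-coded placeholder.
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<div><b>ACME Corp.</b> last trade: 42.17</div>");
        }
    }

The portlet is then declared in the standard portlet.xml deployment descriptor rather than in any vendor-specific configuration file.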
Microsoft Rolls Out New Release of BizTalk Server
Microsoft Corp. today announced that the fifth major release of its
BizTalk Server is now generally available. BizTalk Server 2006 R2 is
designed to allow a company to extend its business processes to the
corporate "edge" to collect RFID data from a warehouse or link with
trading partners, Microsoft said. The company planned to unveil R2
in Taiwan to emphasize the focus on the supply chain in this release;
Microsoft views the Asia-Pacific region as "a primary hub for
manufacturing and the supply chain," said Burley Kawasaki, a director
in Microsoft's connected systems division. R2 includes native support
for RFID and EDI and also adds support for standards and regulations such as SWIFT, HL7,
HIPAA and RosettaNet that are aimed at supporting various vertical
industries such as health care and financial services. BizTalk RFID
is a robust and extensible set of capabilities with open APIs and
tools to cost-effectively build vertical asset-tracking / supply chain
visibility solutions and configure intelligent RFID-driven processes.
BizTalk RFID will include rich data, device and event management.
BizTalk Server 2006 R2 is also available in a new Branch Edition aimed
at supporting the connection of intraorganizational supply chain
processes. Microsoft today also released for testing BizTalk Server
Adapter Pack Beta 2, which is focused on helping companies integrate
business applications from SAP AG, Oracle Corp. and Siebel Systems.
The adapters work with BizTalk Server 2006 R2, SQL Server 2005 and
Microsoft Office SharePoint Server 2007, and are scheduled to be
available in the first half of next year. More Information
Does SOA Need MEST on Top of REST?
From the people who brought you Guerrilla SOA comes Message Exchange
State Transfer (MEST) to compete for service-oriented architecture
(SOA) developers' attention with Representational State Transfer (REST)
and good old SOAP. MEST is how Guerrilla SOA will get done, according
to Jim Webber, Ph.D., SOA practice lead for ThoughtWorks Inc., the
leading proponent of the guerrilla approach. Meanwhile, Savas
Parastatidis, MSc, PhD, a technical computing architect at Microsoft,
has written a definition of MEST that also compares and contrasts it
with REST. Parastatidis sees REST as being primarily about resources
at the end of URLs where MEST would be the paradigm for the basic
message in a business applications, such as an invoice requiring an
action in a basic accounting system. "We would like to see MEST become
for service-orientation and Web Services what REST is for
resource-orientation and the Web," writes Parastatidis. In explaining
the basics of MEST, Parastatidis lists four key points: (1) MEST is
not an application protocol in the same way that REST is not one either;
(2) It is based on the transfer of a message and the processing of the
contents of that message in application-specific ways; (3) The behavior
of what happens with the contents of a message is defined through
protocols (description of complex message-exchange patterns); (4) MEST
attempts to describe service-oriented architectures in terms of
services and messages and a set of architectural principles. More Information
Serena's Mashup Exchange for Business
Serena Software kicked off its Chicago developer conference on Monday
by making available a software-as-a-service mashup exchange that enables
its partners to build, buy and sell business mashups. Mashup Composer
is a Web 2.0 tool that enables users to visually design mashups that
automate business activities. Presented as one of the highlights at
the Serena xChange conference, the tool is designed to address projects
that individually are too small to warrant dedicated IT support. Mashups,
which have previously been the sole domain of specialist Web developers,
combine data from multiple sources to create an integrated Web
application. Also referred to as "custom applications," business mashups
are sometimes lauded as being capable of bringing gains in productivity
and creativity without burdening the IT department. More Information
IBM Throws Weight Behind OpenOffice.org Project
After years of holding out, IBM has joined the OpenOffice.org
open-source community and will contribute code to the office suite that
serves as an alternative to Microsoft's Office software. IBM has been
using code from the project in its development of productivity
applications it included in Lotus 8, the latest version of its
collaboration suite, but until now had not been an official member of
the community, said Doug Heintzman, director of strategy for the Lotus
division at IBM. The company now will contribute its own code to the
project and be more visible about its work to integrate OpenOffice.org
into Lotus, he said. Heintzman acknowledged that the International
Organization for Standardization's (ISO's) recent vote to reject
Microsoft's Open XML file format as a technology standard was one
reason IBM decided to join the effort. OpenOffice.org uses ODF (Open
Document Format), a rival file format to Open XML that is already an
ISO technology standard. IBM is one of the companies pushing for the
use of ODF in companies and government organizations that are creating
mandates to only use technology based on open standards in their IT
architectures. "They are certainly related," he said of the ISO vote
and IBM's decision to join OpenOffice.org. "We think that it's now
time to make sure there is a public code base that implements this
spec so we can attract a critical mass to build these new value
propositions." Sun founded OpenOffice.org and offers its own commercial
implementation of the suite, called StarOffice. The company, a long-time
IBM competitor in the hardware and software markets, also has been
the primary contributor to the code, one of the reasons IBM balked
for so long before joining the group. More Information
Federating Configuration Management Databases (CMDBs)
The CMDB Federation work is a collaboration that involves BMC, CA,
Fujitsu, HP, IBM, and Microsoft. The CMDB Federation Workgroup recently
announced the publication of an industry-wide draft specification for
sharing information between Configuration Management Databases (CMDBs)
and other management data repositories (MDRs), such as asset management
systems and service desks. The specification, which the group plans to
submit as a standard, is intended to enable organizations to federate
and access information from complex, multi-vendor IT infrastructures.
The draft specification defines query and registration web services for
interaction between a federating CMDB and an MDR, based on HTTP, SOAP,
WSDL, XML Schema, and Web Services Interoperability (WS-I) standards.
A federating CMDB can access data from a participating MDR using the
query service defined in the specification and implemented by the MDR.
A client of a federating CMDB can also use the query service to extract
data from another federating CMDB, making it possible for a CMDB to
hierarchically federate with other federating CMDBs. An MDR can also
export data to a CMDB that has implemented a registration service. The
federated CMDB is a "collection of services and data repositories that
contain configuration and other data records about resources. The term
'resource' includes configuration items (e.g., a computer system, an
application, or a router), process artifacts (e.g., an incident record,
a change record), and relationships between configuration item(s) and/or
process artifact(s). The architecture describes a logical model and does
not necessarily reflect a physical manifestation." CMDBs give IT
organizations complete visibility into the attributes, relationships,
and dependencies of the components in their enterprise computing
environments. An industry standard for federating and accessing IT
information will integrate communication between IT management tools.
With a standard way for vendors and tools to share and access
configuration data, organizations can use their CMDBs to create a more
complete and accurate view of IT information spread out across multiple
data sources. This makes it easier to keep track of changes to an IT
environment, such as the last time an application was updated or changes
to critical configuration information. It also helps organizations
better understand the impact of changes they make to the IT environment. CLICK HERE
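To give a feel for the query service, the sketch below shows roughly what a SOAP
request from a federating CMDB to an MDR might look like: a graph-style query that
selects two classes of items and the relationships between them. The element and
namespace names here are illustrative placeholders invented for this sketch; the
draft specification defines its own schema and operation names.

    <?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <!-- Hypothetical federated query: "return all computer systems and
             the incident records that affect them". Element names are
             placeholders, not those defined by the CMDBf draft. -->
        <q:query xmlns:q="urn:example:cmdbf-query">
          <q:itemTemplate id="computers">
            <q:recordConstraint recordType="ComputerSystem"/>
          </q:itemTemplate>
          <q:itemTemplate id="incidents">
            <q:recordConstraint recordType="IncidentRecord"/>
          </q:itemTemplate>
          <q:relationshipTemplate id="affects"
                                  source="incidents" target="computers"/>
        </q:query>
      </soap:Body>
    </soap:Envelope>

The same request shape can be sent by a client to a federating CMDB, which is what
allows CMDBs to federate hierarchically with one another.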
Friday, September 7, 2007
Snom Contest Seeks XML Innovation
snom technology AG has announced the launch of its XML contest, which
calls on the VoIP programmer community and snom partners to develop
XML-Minibrowser applications for the snom 3xx series of phones. The
snom 3xx series of phones, which consists of the snom 300, 320, 360
and 370, has a permanent XML-Minibrowser. The XML contest will focus
on data screens that will work on several snom 3xx series phones.
Contestants can submit entries in two categories: Business Application
and Lifestyle Application. Several factors will be considered when
choosing the winners, including the number of snom phone models for
which the application was programmed, the size of the application and
various technical requirements. According to the Wiki description from
Hirosh Dabui: "The snoms are able to use services from standard web
servers. You can use snoms to deploy customized client services with
which users can interact with the keypad. The snoms will use the
HTTP/HTTPS protocol from standard web servers, like Apache. Typical
services are: To-do lists, Stock Information, Weather, Provisioning,
Daily schedule, and Telephone directory. To create interactive services
is relatively easy when you understand the XML objects that have been
supported since firmware v7.1.7 by the snom 370, snom 360, snom 320 and
snom 300. Snoms can use HTTP to load an XML page or can receive a
SIP-Notify message. IPPhone XML library for PHP provides a set of PHP
classes that allows rapid XML application development for the XML
browsers implemented in both Cisco 79xx and snom phones; a T9-style
phonebook application with flat file and MySQL backends is included."
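As a rough illustration of the kind of data screen the contest is after, the snippet
below sketches a minimal "Daily schedule" page served from an ordinary web server and
fetched by the phone over HTTP. The element names follow the Cisco-style IP phone XML
objects that the snom Minibrowser adopts; check them against the snom Minibrowser
documentation before relying on them.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Minimal text screen for the snom Minibrowser; the schedule entries
         are invented for illustration. -->
    <SnomIPPhoneText>
      <Title>Daily Schedule</Title>
      <Prompt>Press OK to refresh</Prompt>
      <Text>09:00 Stand-up meeting; 10:30 Customer call; 14:00 Firmware review</Text>
    </SnomIPPhoneText>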
Komodo Spawns New Open Source IDE Project
Development tools vendor ActiveState is opening up parts of its Komodo
IDE in a new effort called Open Komodo. Komodo is a Mozilla
Framework-based application that uses Mozilla's XUL (XML-based User
Interface Language), which is Mozilla's language for creating its user
interface. With many IDEs already out in a crowded marketplace for
development tools, Open Komodo's use of Mozilla's XUL may well be its
key differentiator. The Open Komodo effort will take code from
ActiveState's freely available, but not open source, Komodo Edit product
and use it as a base for the new open source IDE. The aim is to create
a community and a project that will help Web developers to more easily
create modern Web-based applications. In February 2007, ActiveState
released a free version of its flagship Komodo IDE called Komodo Edit,
and that release was a prelude to going open source. Open Komodo is
only a subset of Edit, though. The longer-term project is something
called Komodo Snapdragon. The intention of Snapdragon is to provide
a top-quality IDE for Web development that focuses on open
technologies, such as AJAX, HTML/XML, JavaScript and more. Shane
Caraveo, Komodo Dev Lead: "We want to provide tight integration into
other Firefox-based development tools as well. This would target
Web 2.0 applications, and next-generation Rich Internet Applications.
A XUL-based application uses all the same technologies that you would
use to develop an advanced Web site today; this includes XML, CSS and
JavaScript. This type of platform allows people who can develop Web
sites to develop applications. So, I would say that this is an IDE that
Web developers can easily modify, hack, build, extend, without having
to learn new languages and technologies."
Ajax Startup Launches Web Desktop Linked to Gmail
Linspire Chairman Michael Robertson's latest venture, Ajax13, unveils
ajaxWindows, a Web-based middleware platform that stores all desktop
data in a user's Gmail account. The company is Ajax13, the product is
ajaxWindows, and the concept is pretty straightforward: The software
platform is operating system-agnostic and based on the XML User
Interface Language (XUL) to act as a Web-based desktop. Files can be
moved around and opened, and applications launch with a mouse click.
The interface also includes customizable wallpaper, start-up and shut
down sounds, and browser bookmarks. But instead of interacting with
the hardware, the user stores all desktop data, documents, and content,
free of charge, in a Gmail account. So far, Robertson has managed
to collect a fair number of applications, including an Instant Messaging
client, a VoIP telephone client based on the Gizmo Project, and even
Robertson's own MP3 lockers and AnywhereCD application. The ajaxWindows
software is compatible with Internet Explorer and Firefox browsers.
Using IE requires a small plug-in to work with Microsoft (MSFT)'s
ActiveX features and get the XUL engine up to speed. Who is Robertson
targeting with ajaxWindows? On the consumer side, Google Pack and
Microsoft's Windows Live come to mind. But if companies rally around
ajaxWindows' APIs, the virtual desktop could be used in call centers,
workstations and anywhere other SaaS companies like Salesforce.com
are thriving.
W3C OWL Group to Refine and Extend Web Ontology Language
W3C has announced the launch of a new OWL Working Group, described in
a Charter effective September 6, 2007. Ian Horrocks (Oxford University)
and Alan Ruttenberg (ScienceCommons) chair the Working Group. The OWL
Web Ontology Language is playing an important role in an increasing
number and range of applications, and is the focus of research into
tools, reasoning techniques, formal foundations and language
extensions. The widespread use of OWL has revealed requirements for
language extensions that are needed in applications. At the same time,
research and development into reasoning techniques and practical
algorithms has made it possible to provide tool support for language
features that would not have been feasible at the time OWL was
published. The new OWL Working Group is chartered through July 2009
to produce a W3C Recommendation for an extended Web Ontology Language
(OWL), adding a small set of extensions, and defining profiles
identified by users and tool implementers. The extensions, referred
to as OWL 1.1, fall into the following categories: (1) Extensions to
the logic underlying OWL, adding new constructs that extend the
expressivity of OWL (e.g., qualified cardinality restrictions and
property chain inclusion axioms). (2) Extensions to the datatype
support provided by OWL, e.g., with XML Schema Datatype semantics
and datatype facets. (3) Additional syntactic facilities that do not
extend the expressive power of OWL but that make some common modelling
paradigms easier to express (e.g., disjoint unions). The Working
Group will also define a set of language fragments (profiles, or
subsets of the language) that have been identified as having
interesting or useful properties (e.g., being easier to implement).
Other deliverables may include an XML Exchange syntax for OWL 1.1,
with a GRDDL-enabled namespace document; the group will decide whether
this document should go through the W3C Recommendation track or be
published as a W3C Note. The WG may produce additional
outreach material aimed at easing the adoption of OWL 1.1 features
by OWL users and other members of the Semantic Web community.
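To give a flavour of the proposed extensions, the RDF/XML fragment below sketches a
qualified cardinality restriction and a property chain inclusion axiom. It uses the
vocabulary terms that appear in the later OWL 2 drafts (owl:qualifiedCardinality,
owl:onClass, owl:propertyChainAxiom), so the exact names may differ from what the
Working Group finally publishes; namespace declarations are omitted, and the
Car/Wheel and hasGrandparent examples are invented for illustration.

    <!-- A Car has exactly four parts that are Wheels
         (qualified cardinality restriction). -->
    <owl:Class rdf:about="#Car">
      <rdfs:subClassOf>
        <owl:Restriction>
          <owl:onProperty rdf:resource="#hasPart"/>
          <owl:onClass rdf:resource="#Wheel"/>
          <owl:qualifiedCardinality
              rdf:datatype="http://www.w3.org/2001/XMLSchema#nonNegativeInteger">4</owl:qualifiedCardinality>
        </owl:Restriction>
      </rdfs:subClassOf>
    </owl:Class>

    <!-- hasParent followed by hasParent implies hasGrandparent
         (property chain inclusion axiom). -->
    <owl:ObjectProperty rdf:about="#hasGrandparent">
      <owl:propertyChainAxiom rdf:parseType="Collection">
        <owl:ObjectProperty rdf:about="#hasParent"/>
        <owl:ObjectProperty rdf:about="#hasParent"/>
      </owl:propertyChainAxiom>
    </owl:ObjectProperty>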
Thursday, September 6, 2007
Save Time and Code with XPath 2.0 and XSLT 2.0
This article demonstrates many of the new features of XPath 2.0 and XSLT
2.0. Three interesting new features in XPath 2.0 and XSLT 2.0 are the
'item' data type, the 'to' operator, and the concept of sequences. Here
we build a sample application that uses these features to generate a
sophisticated HTML view of an XML document, and with the new features
in XSLT 2.0, create shorter stylesheets that are easier to maintain.
Along the way, we spend a bit of time on data typing in XSLT 2.0 and
learn to use the new 'xsl:function' element. In this sample application,
we take an unwieldy stylesheet and refactor it into a much smaller and
more maintainable piece of code. One of the major new concepts in XPath
2.0 and XSLT 2.0 is that everything is a sequence. In XPath 1.0 and XSLT
1.0, you typically worked with trees of nodes. The parsed XML document
was a tree that contained the document node and its descendants. Using
that tree of nodes, you could find the node for the root element, along
with all of the root element's descendants, attributes, and siblings.
Any comments or processing instructions outside the root element of the
XML file are considered siblings of the root element. When you work with
an XML document in XPath 2.0 and XSLT 2.0, you use the sequence in the
same way as the tree structure in XPath 1.0 and XSLT 1.0. The sequence
contains a single item (the document node), and you use it the same way
you always have. However, you can create sequences of atomic values.
In the sample application for this article, you manage the data for a 16-team
single-elimination tournament. For Further Information CLICK HERE
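The article's own stylesheet is not reproduced here, but the short, invented
stylesheet below illustrates the features it highlights: the 'to' operator building
a sequence of atomic values, typed parameters and return values, and a user-defined
'xsl:function'. The 'tour' namespace and the round-labelling logic are hypothetical.

    <xsl:stylesheet version="2.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:xs="http://www.w3.org/2001/XMLSchema"
        xmlns:tour="http://example.org/tournament"
        exclude-result-prefixes="xs tour">

      <!-- User-defined function: turn a round number into a display label. -->
      <xsl:function name="tour:round-label" as="xs:string">
        <xsl:param name="round" as="xs:integer"/>
        <xsl:sequence select="concat('Round ', $round)"/>
      </xsl:function>

      <xsl:template match="/">
        <ul>
          <!-- '1 to 4' builds a sequence of four xs:integer items;
               each one becomes the context item in turn. -->
          <xsl:for-each select="1 to 4">
            <li><xsl:value-of select="tour:round-label(.)"/></li>
          </xsl:for-each>
        </ul>
      </xsl:template>
    </xsl:stylesheet>

A 16-team single-elimination bracket has four such rounds, which is the kind of
repetitive structure that 'to' and 'xsl:function' shorten considerably.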
Speech Synthesis Markup Language (SSML) Version 1.1
W3C announced that members of the Voice Browser Working Group have
released an updated Working Draft for the "Speech Synthesis Markup
Language (SSML) Version 1.1" specification. Changes from the previous
draft include the usage of XML 1.1 and IRIs, and the specification of
voice selection and language speaking control. The W3C Voice Browser
Working Group has sought to develop standards to enable access to the
Web using spoken interaction. The Speech Synthesis Markup Language
Specification is one of these standards and is designed to provide a
rich, XML-based markup language for assisting the generation of
synthetic speech in Web and other applications. The essential role of
the markup language is to provide authors of synthesizable content a
standard way to control aspects of speech such as pronunciation, volume,
pitch, rate, etc. across different synthesis-capable platforms. The
intended use of SSML is to improve the quality of synthesized content.
Different markup elements impact different stages of the synthesis
process. The markup may be produced either automatically, for instance
via XSLT or CSS3 from an XHTML document, or by human authoring. Markup
may be present within a complete SSML document or as part of a fragment
embedded in another language, although no interactions with other
languages are specified as part of SSML itself. Most of the markup
included in SSML is suitable for use by the majority of content
developers; however, some advanced features like 'phoneme' and 'prosody'
(e.g., for speech contour design) may require specialized knowledge.
SSML Version 1.1 improves on W3C's SSML 1.0 Recommendation by adding
support for more conventions and practices of the world's natural
(human) languages.
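For readers new to the markup, the short, invented prompt below shows the kind of
author control SSML gives over pauses, speaking rate, volume, and emphasis. The
elements shown are already part of SSML 1.0, which the 1.1 draft retains while
extending voice selection and language control.

    <?xml version="1.0" encoding="UTF-8"?>
    <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
           xml:lang="en-US">
      Welcome to the order status line.
      <break time="300ms"/>
      <prosody rate="slow" volume="loud">
        Your package ships <emphasis level="strong">tomorrow</emphasis>.
      </prosody>
    </speak>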
XML Daily Newslink. Wednesday, 05 September 2007
A first draft of a document providing UN/CEFACT - UBL NDR comparison
and analysis has been posted to the document repository of the OASIS
Universal Business Language (UBL) Technical Committee by Michael
Grimley. Specifically, it provides a comparison and analysis of
Version 2.0 of the UN/CEFACT and UBL NDRs. Version 2.0 of the UN/CEFACT
XML Naming and Design Rules technical specification "allows users to
identify, capture and maximize the re-use of business information
expressed as XML schema components. It ensures consistent and efficient
use of XML in a business-to-business and application-to-application
environment. It can be utilized wherever business information is being
shared or exchanged among and between enterprises and government
agencies worldwide using XML schema." Mark Crawford (SAP Standards
Architect) wrote in a posting to the XML Developers List (26-April-2007):
"The CCTS standards stack consists of CCTS. It requires the use of
no other specification -- UN/CEFACT or other -- for implementation.
The UN/CEFACT CCTS standards stack however does have a defined XML NDR
as part of the stack - for use by UN/CEFACT. Further CCTS does not
require any syntax specific set of rules. In fact its power is in its
syntax neutrality and its context mechanisms... there are [several NDRs]
and I am not sure I would include OAGi 9 at this point. The US Department
of the Navy for example has their own CCTS NDR. What is interesting
is that many SDOs have realized -- driven in large part by their
membership -- that it makes no sense to have different flavors of NDRs
and that yes there is real value in being able to auto generate schema
from the business models. OAGi, GS1, CIDX, ACORD, UN/CEFACT, UBL,
RosettaNet, AIAG, and others have come together to work collaboratively
on the next set of UN/CEFACT XML NDRs that will serve as a convergence
point for all of these organizations."
Citrix Buys XML Security Firm QuickTree
Citrix Systems has bought QuickTree, a software developer focused on
improving security and performance for XML and Web services. QuickTree
software can be added to network gear to guard against attacks carried
within XML traffic. The company's XML Security Module (XSM) processes
and inspects XML traffic as it passes through these devices, such as
firewalls, load balancers and SSL VPN gateways. Competitors include
Layer 7, Cisco (through its purchase of Reactivity), and Forum Systems.
Citrix argues that while stand-alone XML appliances are one way to
boost XML processing speeds and secure XML traffic, it is more efficient
to integrate these features in existing network infrastructure; such
equipment would reduce the number of devices in the network and
ultimately scale better than appliances. Citrix says it will integrate
XSM into its AppExpert Policy Builder platform, a GUI tool for creating
policies for applications and users that are translated into rules that
are enforced by Citrix network infrastructure. The company says XSM
will be integrated first with NetScaler, which typically sits between
Web servers and the larger private or public IP network.
Parsing Microformats
Microformats are a way to embed specific semantic data into the HTML
that we use today. One of the first questions an XML guru might ask
is "Why use HTML when XML lets you create the same semantics?" [but]
I won't go into all the reasons XML might be a better or worse choice
for encoding data or why microformats have chosen to use HTML as their
encoding base. This article will focus more on how to extract
microformats data from the HTML, how the basic parsing rules work, and
how they differ from XML... One of the more popular and well-established
microformats is hCard. This is a vCard representation in HTML, hence
the "h" in hCard, HTML vCard. A vCard contains basic information about
a person or an organization. This format is used extensively in address
book applications as a way to back up and interchange contact information.
By Internet standards it's an old format; the specification is RFC 2426
from 1998. It is pre-XML, so the syntax is just simple text with a few
delimiters and start and end elements... A vCard file has a 'BEGIN:VCARD'
and an 'END:VCARD' that act as a container so the parser knows when to
stop looking for more data. There might be multiple vCards in one file,
so this nicely groups the data into distinct vCards. The 'FN' stands
for Formatted Name, which is used as the display name. The 'N' is the
structured name, which encodes things like first, last, middle names,
prefixes and suffixes, all semicolon separated. Finally, 'URL' is the
URL of the web site associated with this contact... If we were to encode
this in XML it would probably look something like [XML code]... Let's
see how we can mark up the same vCard data in HTML using microformats,
which make extensive use of the 'rel', 'rev', and 'class' attributes to
help encode the semantics. The class attribute is used in much the same
way as elements are used in XML. So the previous XML example might be
marked up in HTML as [HTML code]... Let's take that HTML example and try
to parse it using XSLT. Microformats are designed to work with HTML 4
and higher; TIDY, or a function such as HTMLlib or loadHTML, will
load the HTML document and convert it into a state usable by XSLT...
The parsing of microformats data depends on the type of data and on
the HTML element it was encoded on. This is a very basic overview of
parsing data from a microformat. There are more rules depending on the
type of vCard property and on which HTML element it is encoded...
[Note: see the preceding citation for vCard and (IETF) cardDAV.]
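The article's [XML code] and [HTML code] listings are not reproduced above, but an
invented contact makes the mapping concrete: the vCard properties FN, N, and URL
become class names on ordinary HTML elements inside a container marked class="vcard",
which is what a microformats parser (or the XSLT described above) looks for.

    <!-- hCard equivalent of a vCard with
           FN:Jane Example
           N:Example;Jane;;;
           URL:http://example.org/
         (the contact details are invented for illustration) -->
    <div class="vcard">
      <a class="url fn" href="http://example.org/">Jane Example</a>
      <span class="n">
        <span class="given-name">Jane</span>
        <span class="family-name">Example</span>
      </span>
    </div>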