A Beta release of "Content Model-driven Fedora 3.0" has been announced
by the Fedora Commons Project developers. The Content Model Architecture
(CMA) is described as a
powerful, new integrated structure for persisting and delivering the
essential characteristics of digital objects in Fedora while simplifying
its use. Fedora Commons is the home of the unique Fedora open source
software, a robust integrated repository-centered platform that enables
the storage, access and management of virtually any kind of digital
content. Prior implementations of the Fedora Repository utilized a set
of specialized digital objects as a functional and persistence framework.
All of these objects conform to the same basic object model. Digital
objects in the CMA are conceptually similar to those in prior versions
of Fedora, though some important implementation details have changed.
Fedora still
implements a compound digital object design consisting of an XML
encapsulation (now FOXML 1.1) and a set of bitstreams identified by
the "Datastream" XML element. We can also assemble multi-object groups
of related digital objects as before using semantic technologies. In
the CMA, the "content model" is defined as a formal model that describes
the characteristics of one or more digital objects. A digital object
may be said to conform to a content model. In the CMA, the concept of
the content model is comprehensive, including all possible
characteristics which are needed to enable persistence and delivery
of the content. This can include structural, behavioral and semantic
information. It can also include a description of the permitted,
excluded, and required relationships to other digital objects or
identifiable entities. "Following the rules of Fedora identifiers, the
identifier of the CModel object can be encoded within a URI. We will
describe the rationale for this decision in a later section but this
approach provides two immediate benefits: (1) it provides a scheme
which works within the Fedora architecture with minimal impact, and (2)
it is compatible with the Web architecture, RDF and OWL. We can even
build functionality using just the knowledge of the identifier without
creating a content model. Having a uniform method for identifying a
digital object's class maximizes interoperability..."
Friday, December 28, 2007
W3C Last Call Working Drafts for SVG Print 1.2 (Language, Primer)
W3C announced that the SVG Working Group has published Last Call Working
Drafts for the "SVG Print 1.2, Part 2: Language" and "SVG Print 1.2,
Part 1: Primer" specifications. The "Language" document defines features
of the Scalable Vector Graphics (SVG) Language that are specifically for
printing environments. The "Primer" explains the technical background
and gives guidelines on how to use the SVG Print specification with SVG
1.2 Tiny and SVG 1.2 Full modules for printing; it is purely informative
and has no conformance statements. Because of its scalable, geometric
nature, SVG is inherently better suited to print than raster image formats.
The same geometry can be displayed on screen and on a printer, with
identical layout in both but taking advantage of the higher resolution
of print media. The same colors can be output, using an ICC-based color
managed workflow on the printer and an sRGB fallback approximation on
screen. This has been true since SVG 1.0, and so SVG has been used in
print workflows (for example, in combination with XSL-FO) as well as on
screen. However, SVG also has dynamic, interactive features such as
declarative animation, scripting, timed elements like audio and video,
and user interaction such as event flow and link activation. None of these
are applicable to a print context. SVG 1.1 gives static and dynamic
conformance classes, but further guidance on what exactly SVG Printers
should do with such general content is helpful. The SVG Print
specification defines processing rules for handling such general purpose
content which was not designed to be printed, but which may be
encountered anyhow. It is possible to generate SVG which is exclusively
intended for print (for example, a printer which natively understands SVG).
This content might be created in an illustration program, or it might
be an output from a layout program, such as an XSL-FO renderer; or it
might be generated by an SVG Print driver. W3C's Graphics Activity has
been developing graphics specifications for over ten years: "Scalable
Vector Graphics (SVG), the current effort of the Activity, brings the
powerful combination of interactive, animated two-dimensional vector
graphics and Extensible Markup Language (XML). WebCGM 2.0 is used mainly
in industrial and defence technical documents. Earlier work was concerned
with Portable Network Graphics (PNG) and with WebCGM 1.0."
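To make the static-content point concrete, here is a minimal sketch of
the kind of SVG a print workflow might emit: purely geometric content,
with no script, animation or interaction. The <pageSet>/<page> structure
reflects my reading of the SVG Print 1.2 drafts and the attributes shown
are illustrative, so treat this as a sketch rather than a conformant
example.
    <svg xmlns="http://www.w3.org/2000/svg" version="1.2"
         width="210mm" height="297mm" viewBox="0 0 210 297">
      <!-- static, print-oriented content only -->
      <pageSet>
        <page>
          <rect x="20" y="20" width="80" height="40" fill="#336699"/>
          <text x="20" y="75" font-size="8">Page 1</text>
        </page>
        <page>
          <circle cx="105" cy="148" r="30" fill="none" stroke="black"/>
        </page>
      </pageSet>
    </svg>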
Drafts for the "SVG Print 1.2, Part 2: Language" and "SVG Print 1.2,
Part 1: Primer" specifications. The "Language" document defines features
of the Scalable Vector Graphics (SVG) Language that are specifically for
printing environments. The "Primer" explains the technical background
and gives guidelines on how to use the SVG Print specification with SVG
1.2 Tiny and SVG 1.2 Full modules for printing; it is purely informative
and has no conformance statements. Because of its scalable, geometric
nature, SVG is inherently better suited to print than raster image formats.
The same geometry can be displayed on screen and on a printer, with
identical layout in both but taking advantage of the higher resolution
of print media. The same colors can be output, using an ICC-based color
managed workflow on the printer and an sRGB fallback approximation on
screen. This has been true since SVG 1.0, and so SVG has been used in
print workflows (for example, in combination with XSL FO) as well as on
screen. However, SVG also has dynamic, interactive features such as
declarative animation, scripting, timed elements like audio and video,
and user interaction such as event flow and link activation. None of these
are applicable to a print context. SVG 1.1 gives static and dynamic
conformance classes, but further guidance on what exactly SVG Printers
should do with such general content is helpful. The SVG Print
specification defines processing rules for handling such general purpose
content which was not designed to be printed, but which may be
encountered anyhow. It is possible to generate SVG which is exclusively
intended for print (for example, a printer which natively understands SVG).
This content might be created in an illustration program, or it might
be an output from a layout program, such as an XSL-FO renderer; or it
might be generated by an SVG Print driver. W3C's Graphics Activity has
been developing graphics specifications for over ten years: "Scalable
Vector Graphics (SVG), the current effort of the Activity, brings the
powerful combination of interactive, animated two-dimensional vector
graphics and Extensible Markup Language (XML). WebCGM 2.0 is used mainly
in industrial and defence technical documents. Earlier work was concerned
with Portable Network Graphics (PNG) and with WebCGM 1.0."
Using Intelligence Community Security Markings (IC-ISM) with NIEM
NIEM (National Information Exchange Model) is a partnership of the
U.S. Department of Justice and the Department of Homeland Security.
Developers recently announced an interim solution designed to allow
users to use IC-ISM within NIEM 2.0. The IC-ISM standard is an XML
Schema described in the IC-ISM Data Element Dictionary and the
Implementation Guide. It is one of the Intelligence Community (IC)
Metadata Standards for Information Assurance and is the preferred
way to apply information security markings within XML instances. Until
recently, the schema for the Intelligence Community Information
Security Marking (IC-ISM) standard was considered For Official Use Only
(FOUO) and could not be published. Therefore, NIEM 2.0 could not
integrate components of IC-ISM without publishing the IC-ISM schema.
Actions have now been taken to restore the ability to use IC-ISM within
NIEM 2.0 and future releases. Facilitating the preferred (future)
use of the IC-ISM standard in NIEM will require, in sequence:
(1) Completion of the NIEM versioning architecture; (2) A
forward-compatible release update to NIEM 2.0; (3) Minor change(s)
to the NIEM NDR; (4) Governance Committee review and approval.
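For readers unfamiliar with IC-ISM, the standard works by attaching
security-marking attributes to elements of an XML instance, so that each
portion of a document carries its own marking. The fragment below is only
a sketch of that idea: the namespace URI and attribute names are written
from memory and should be checked against the published IC-ISM schema.
    <report xmlns:ism="urn:us:gov:ic:ism"
            ism:classification="U" ism:ownerProducer="USA">
      <!-- portion marking: each marked element carries its own attributes -->
      <para ism:classification="U" ism:ownerProducer="USA">
        Releasable summary text.
      </para>
    </report>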
WSO2 Registry Version 0.1
On behalf of the WSO2 Registry team, Paul Fremantle announced the version
0.1 release of the WSO2 Registry. "This early release demonstrates a
completely REST-based approach to storing, searching and managing SOA
metadata. The Registry stores any kind of resource in a simple JDBC
driven store, and uses AtomPub as a web API to allow publishing and
searching. The Registry has been deliberately designed to bring social
interaction to the world of SOA metadata by including tagging, comments,
rating and a wiki-like approach to SOA registries... WSO2 Registry
enables you to store, catalog, index and manage your enterprise
metadata in a simple, scalable and easy-to-use model. It is designed around
community concepts such as tags, comments, ratings, users and roles.
Think of the registry as a structured wiki designed to help you manage
your metadata in a simple, business-friendly system. In addition, the
registry allows you to store more unstructured data such as Word
documents, Excel spreadsheets and text formats. Using these approaches,
you can build a catalog of enterprise information ranging from services
and service descriptions to employee data and ongoing projects. WSO2
Registry can be deployed in application servers and accessed using the
Web UI or the APP interface. It can also be used as a Java library
inside other Java programs as a resource store with all community
features and versioning."
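Since the release note leans on AtomPub as the web API, a sketch of the
kind of Atom entry a client might publish to a registry collection may
help; the collection URI, identifiers, category terms and content link
here are hypothetical, not taken from the WSO2 Registry documentation.
    <!-- POSTed to a hypothetical collection such as /registry/resources -->
    <entry xmlns="http://www.w3.org/2005/Atom">
      <id>urn:uuid:6e8bc430-9c3a-11dc-8314-0800200c9a66</id>
      <updated>2007-12-28T00:00:00Z</updated>
      <title>OrderService WSDL</title>
      <category term="soa"/>
      <category term="orders"/>  <!-- tags can map to Atom categories -->
      <summary>Contract for the order-processing service</summary>
      <content type="application/wsdl+xml"
               src="http://example.org/wsdl/OrderService.wsdl"/>
    </entry>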
XML Moves to mySQL
The unification of XML and SQL relational data has taken another
significant step forward recently with the introduction of significant
new XML functionality in MySQL, the world's most popular open source
database. In versions 5.1 and 6.0, MySQL adds the ability to retrieve
tables (and JOINS) as XML results, to retrieve SQL schemas as XML files,
to both select content via a subset of XPath and to update content using
similar functions, and the like. I think the ramifications for this are
actually quite huge. I've known for some time that much of the driving
technology behind Web 2.0 is the power of SQL databases, with the bulk
of those to date being MySQL databases. While enterprise-level databases
such as Oracle 10g+, IBM DB2, and Microsoft SQL Server have long had XML
capabilities, they also account collectively for a surprisingly small
amount of the outward facing databases on the web, especially compared
to MySQL. However, this also has the unfortunate effect of promoting
a relational database model as the prime one for the web, diminishing
the utility of XML there and increasing the fragility of Web 2.0
applications. With native XML support moving into MySQL, it opens up a
chance for XML developers to start working within that community, and
also raises some significant issues with regard to how unstructured
and semi-structured data is stored, retrieved and manipulated... The XML
support for MySQL is not yet at the level where it can support XQuery,
but I think that this will come in time given the degree of support they
have for the XPath specification. Keep an eye on this development.
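To picture what "tables as XML results" means in practice, the MySQL
command-line client can emit a result set as XML (via its --xml switch),
while the server-side ExtractValue() and UpdateXML() functions cover the
XPath-style selection and update mentioned above. The output sketched
below uses a made-up table and query, and the exact wrapper attributes
may differ between versions.
    <?xml version="1.0"?>
    <resultset statement="SELECT id, title FROM articles LIMIT 2">
      <row>
        <field name="id">1</field>
        <field name="title">XML Moves to MySQL</field>
      </row>
      <row>
        <field name="id">2</field>
        <field name="title">XPath in the database</field>
      </row>
    </resultset>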
Related reference: Jon Stephens, "Using XML in MySQL 5.1 and 6.0."
W3C Drafts for XML Interchange (EXI): Format, Best Practices, Primer
W3C's Efficient XML Interchange (EXI) Working Group recently published
three documents. First Public Working Drafts have been issued for
"Efficient XML Interchange (EXI) Best Practice" and "Efficient XML
Interchange (EXI) Primer." An updated WD is available for the "Efficient
XML Interchange (EXI) Format 1.0" specification. Efficient XML
Interchange (EXI) is a very compact representation for the Extensible
Markup Language (XML) Information Set that is intended to simultaneously
optimize performance and the utilization of computational resources.
The EXI format uses a hybrid approach drawn from the information and
formal language theories, plus practical techniques verified by
measurements, for entropy encoding XML information. Using a relatively
simple algorithm, which is amenable to fast and compact implementation,
and a small set of data types, it reliably produces efficient encodings
of XML event streams. The event production system and format definition
of EXI are presented. The "Best Practices" document provides explanations
of format features and techniques to support interoperable information
exchanges using EXI. While intended primarily as a practical guide for
systems architects and programmers, it also presents information suitable
for the general reader interested in EXI's intended role in the expanding
Web. The "EXI Primer" a non-normative document intended to provide an
easily readable technical background on the Efficient XML Interchange
(EXI) format. It is oriented towards quickly understanding how the EXI
format can be used in practice and how options can be set to achieve
specific needs. Section 2 "Concepts" describes the structure of an EXI
document and introduces the notions of EXI header, EXI body and EXI
grammar which are fundamental to the understanding of the EXI format.
Additional details about data type representation, compression, and
their interaction with other format features are presented. Section 3
"Efficient XML Interchange by Example" provides a detailed, bit-level
description of a schema-less example.
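As a rough aid to the "event stream" idea, here is a tiny fragment
annotated, in a comment, with the abstract EXI events it decomposes into.
The event names (SD, SE, AT, CH, EE, ED) follow the format draft's
terminology; the actual bits on the wire depend on grammars and options,
so this illustrates the abstraction only, not an encoding.
    <!-- approximate event view of the element below:
         SD, SE(note), AT(date="2007-12-28"), CH("Call the TAG"), EE, ED -->
    <note date="2007-12-28">Call the TAG</note>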
A Document Format for Expressing Authorization Policies to Tackle Spam
Members of the IETF SIPPING Working Group have published an updated draft
defining SPIT authorization documents that use SAML. The problem of Spam
over Internet Telephony (SPIT) is an imminent challenge, and only the
combination of several techniques can provide a framework for dealing
with unwanted communication. The responsibility for filtering or blocking
calls can belong to different elements in the call flow and may depend
on various factors. This document defines an authorization based policy
language that allows end users to upload anti-SPIT policies to
intermediaries, such as SIP proxies. These policies mitigate unwanted SIP
communications. It extends the Common Policy authorization framework with
additional conditions and actions. The new conditions match a particular
Session Initiation Protocol (SIP) communication pattern based on a number
of attributes. The range of attributes includes information provided, for
example, by SIP itself, by the SIP identity mechanism, or by information
carried within SAML assertions... A SPIT authorization document is an
XML document, formatted according to the schema defined in RFC 4745.
SPIT authorization documents inherit the MIME type of common policy
documents, application/auth-policy+xml. As described in RFC 4745, this
document is composed of rules which contain three parts -- conditions,
actions, and transformations. Each action or transformation, which is
also called a permission, has the property of being a positive grant to
the authorization server to perform the resulting actions, be it allow,
block, etc. As a result, there is a well-defined mechanism for combining
actions and transformations obtained from several sources. This
mechanism therefore can be used to filter connection attempts thus
leading to effective SPIT prevention... Policies are XML documents that
are stored at a Proxy Server or a dedicated device. The Rule Maker
therefore needs to use a protocol to create, modify and delete the
authorization policies defined in this document. Such a protocol is
available with the Extensible Markup Language (XML) Configuration
Access Protocol (XCAP), per RFC 4825..."
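For orientation, a SPIT authorization document follows the RFC 4745
common-policy skeleton of rules holding conditions, actions and
transformations. The sketch below keeps that skeleton, but the
SPIT-specific condition and action elements, and their namespace, are
invented placeholders rather than the draft's actual vocabulary.
    <ruleset xmlns="urn:ietf:params:xml:ns:common-policy"
             xmlns:spit="urn:example:spit-policy"> <!-- placeholder ns -->
      <rule id="block-unverified">
        <conditions>
          <!-- hypothetical condition: caller identity not asserted -->
          <spit:unauthenticated-caller/>
        </conditions>
        <actions>
          <!-- hypothetical action understood by the SIP proxy -->
          <spit:handling>block</spit:handling>
        </actions>
        <transformations/>
      </rule>
    </ruleset>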
ACL Data Model for NETCONF
Members of the IETF Network Configuration (NETCONF) Working Group
have published a draft "ACL Data Model for NETCONF." The Working
Group was chartered to produce a protocol for network configuration
that uses XML for data encoding purposes: "Configuration of networks
of devices has become a critical requirement for operators in today's
highly interoperable networks. Operators from large to small have
developed their own mechanisms or used vendor specific mechanisms to
transfer configuration data to and from a device, and for examining
device state information which may impact the configuration..." The
"ACL Data Model" document introduces a data model developed by the
authors so that it facilitates discussion of data model which NETCONF
protocol carry. Data modeling of configuration data of each network
function is necessary in order to achieve interoperability among NETCONF
entities. For that purpose, the authors devised an ACL data model and
developed a network configuration application using that data model...
The data model was originally designed as a UML (Unified Modeling
Language) class diagram. From the class diagram, the ACL's XML schema
can be generated; the configuration data are sent in a form conforming
to this XML schema. The configuration application developed using the
ACL data model can open and read a file of ACL entries; it reads the
ACL list line by line and transforms it into a NETCONF request message
conforming to the XML schema described above. The application then sends
the NETCONF request message and configures the network device accordingly...
When NETCONF messages based on the proposed data model are exchanged,
security must also be addressed. WS-Security can secure the data in
transit by using the XML Signature and XML Encryption mechanisms....
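The NETCONF framing itself (RFC 4741) is XML, so the request the
application emits would look roughly like the sketch below; the
<rpc>/<edit-config> wrapper is standard NETCONF, while the ACL payload
namespace and element names are placeholders standing in for the schema
generated from the authors' UML model.
    <rpc message-id="101"
         xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <edit-config>
        <target><running/></target>
        <config>
          <!-- hypothetical ACL payload from the UML-derived schema -->
          <acl xmlns="urn:example:acl-model">
            <entry>
              <sequence>10</sequence>
              <action>deny</action>
              <source>192.0.2.0/24</source>
            </entry>
          </acl>
        </config>
      </edit-config>
    </rpc>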
have published a draft "ACL Data Model for NETCONF." The Working
Group was chartered to produce a protocol for network configuration
that uses XML for data encoding purposes: "Configuration of networks
of devices has become a critical requirement for operators in today's
highly interoperable networks. Operators from large to small have
developed their own mechanisms or used vendor specific mechanisms to
transfer configuration data to and from a device, and for examining
device state information which may impact the configuration..." The
"ACL Data Model" document introduces a data model developed by the
authors so that it facilitates discussion of data model which NETCONF
protocol carry. Data modeling of configuration data of each network
function is necessary in order to achieve interoperability among NETCONF
entities. For that purpose, the authors devised an ACL data model and
developed a network configuration application using that data model...
The data model was originally designed in a style of UML (Unified
Modeling Language) class diagram. From the class diagram ACL's XML
schema can be generated; the configuration data are sent in a style
conforming to this XML schema. The configuration application developed
using the ACL data model can open and read the file. Then, the
configuration application reads the lists of ACL line by line and
transforms them into a NETCONF request message conforming to the XML
schema listed before. And the configuration application sends the
NETCONF request message and configures the network device accordingly...
When we exchange NETCONF messages based on the data model we proposed,
security should be taken care of. WS-Security can achieve secure data
transportation by utilizing XML Signature, XML Encryption mechanism...." More Information
Electricity Costs Attacked Through XML
A power consortium that distributes a mix of "green" and conventional
electricity is implementing an XML-based settlements system that drives
costs out of power distribution. The Northern California Power Agency
is one of several state-chartered coordinators in California that
schedules the delivery of power to the California power grid and then
settles the payment due to the supplier. NCPA sells the power generated
by the cities of Palo Alto and Santa Clara, as well as hydro and
geothermal sources farther north. Power settlements are a highly
regulated and complicated process. Each settlement statement, which
can run to 100 MB of data, records how much power a particular supplier
delivered and how much was used by commercial vs. residential customers.
The two have different rates of payment, set by the Public Utilities
Commission. The settlements are complicated by the fact that electricity
meters are read only once every 90 days; many settlements must be based
on an estimate of consumption that gets revised as meter readings come
in. On top of that, there are fees for transmission across the grid,
sometimes set by the PUC to apply retroactively. On behalf of a supplier,
NCPA can protest that fees for transmission usage weren't calculated
correctly, and the dispute requires a review of all relevant data. NCPA
sought bids from settlement-system vendors three years ago and received
quotes that were
"several hundred thousand dollars a year in licensing fees and ongoing
maintenance," said Caracristi. The need for services from these customized
systems adds to the cost of power consumption for every California
consumer. Faced with such a large annual expense, NCPA sought instead
to develop the in-house expertise to deal with the statements. Senior
programmer analyst Carlo Tiu and his team at NCPA used Oracle's XML
handling capabilities gained in the second release of 10g, a feature
known as Oracle XML DB. They developed an XML schema that allowed Oracle
to handle the data and an XML configuration file that contained the
rules for determining supplier payment from the data. That file can be
regularly updated, without needing to modify the XML data itself.
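The pattern described, settlement data validated against a registered
XML schema while the payment-determination rules live in a separate XML
configuration document that can be revised on its own, might look
something like the fragment below. Every element name and value here is
invented for illustration; the article does not publish NCPA's actual
schema or rule format.
    <!-- hypothetical rules document, maintained separately from the
         settlement data it is applied to -->
    <settlement-rules effective-date="2007-10-01">
      <rate customer-class="residential" price-per-kwh="0.112"/>
      <rate customer-class="commercial" price-per-kwh="0.097"/>
      <transmission-fee schedule="PUC-retroactive" applies-from="2007-07-01"/>
    </settlement-rules>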
CCTS 2.01 Data Type Catalogue
When the Core Components Technical Specification (CCTS) Version 2.01
was published by UN/CEFACT in 2003, it contained a list of 10 Core
Component Types, 20 primary and secondary representation terms, and
supporting Content and Supplementary Components. The Core Component
Types were simple data types that were intended to be used as the basis
for the development of data types to express the value domain for CCTS
leaf elements (Basic Core Components and Basic Business Information
Entities). It was envisioned that the 10 CCTs and the 20 Representation
Terms would be used to create a set of 20 unqualified data types and an
unlimited number of qualified (more restricted) data types. It was
also envisioned that future updates to the data types would be published
independently of the CCTS specification. The recently published CCTS 2.01
Data Type Catalogue delivers on those expectations. It republishes the
CCTs, Representation Terms, Content and Supplementary Components, and
allowed restrictions by primitive data type that were contained in CCTS
2.01. It also, for the first time, publishes the full set of 20
unqualified data types that were implicitly expressed in CCTS 2.01.
These data types have also been expressed as XML schema in support of
the UN/CEFACT XML NDR standard. The UN/CEFACT Applied Technologies
Group is responsible for maintaining changes to the data type catalogue
and has provided a Data Maintenance Request form for interested parties
to submit their requested changes. ATG is also working on the CCTS 3.0
data type catalogue which expands the number of data types, and also
looks at closer alignment with the data types of the W3C XSD specification.
SAP actively participates in the development and maintenance of these
data types, and has contributed a number of additional unqualified data
types that are under consideration within UN/CEFACT. Additionally, these
unqualified (or Core) data types are the lowest level of data
interoperability being created across a wide variety of individual
business standards development organizations such as ACORD, CIDX, OAGi,
RosettaNet, UBL and others who have adopted, or are in the process of
adopting, CCTS and its supporting data types.
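To give a concrete sense of what an "unqualified data type" looks like
when expressed as XML Schema, the sketch below follows the common CCTS
pattern of a content component plus supplementary components carried as
attributes (here, an amount with its currency identifier). It is
simplified from memory: the published UN/CEFACT schemas bind the
attribute to a code list and define further supplementary components.
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <!-- simplified unqualified data type: decimal content component
           with a currency identifier supplementary component -->
      <xsd:complexType name="AmountType">
        <xsd:simpleContent>
          <xsd:extension base="xsd:decimal">
            <xsd:attribute name="currencyID" type="xsd:token" use="required"/>
          </xsd:extension>
        </xsd:simpleContent>
      </xsd:complexType>
    </xsd:schema>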
Technical Comparison: OpenID and SAML
This document presents a technical comparison of the OpenID
Authentication protocol and the Security Assertion Markup Language
(SAML) Web Browser SSO Profile and the SAML framework itself. Topics
addressed include design centers, terminology, specification set
contents and scope, user identifier treatment, web single sign-on
profiles, trust, security, identity provider discovery mechanisms, key
agreement approaches, as well as message formats and protocol bindings.
An executive summary targeting various audiences, and presented from
the perspectives of end-users, implementors, and deployers, is provided.
We do not attempt to assign relative value between OpenID and SAML,
e.g., which is 'better'; rather, we attempt to present an objective
technical comparison... OpenID 1.X and 2.0, and SAML 2.0's Web Browser
SSO Profile (and earlier versions thereof), offer functionality quite
similar to each other. Obvious differentiators to a protocol designer
are the message encodings, security mechanisms, and overall profile
flows. Other differentiators include the layout and scope of the
specification, trust and security aspects, OP/IDP discovery mechanisms,
user-visible features such as identifier treatment, key agreement
provisions, and security assertion schema and features..."
Five Things You'll Love About Firefox Version 3
Although the basic look of the Firefox 3 Beta 2 browser hasn't changed,
there are actually quite a few new features coming. For a complete list,
you can check out Mozilla's release notes. Some of the new features in
Firefox 3 are not immediately obvious -- at least, not to the casual
user. Among other things, Mozilla is incorporating new graphics- and
text-rendering architectures in its browser layout engine (Gecko 1.9)
to offer rendering improvements in CSS and SVG; adding a number of
security features, including malware protection and version checks of
its add-ons; and adding offline support for suitably coded Web applications.
(1) Easier downloads: While the older Download Manager was quite
serviceable, Mozilla has made some nice tweaks in the new version. It
now lists not only the file name, but the URL it was downloaded from,
and includes an icon that leads to information about when and where you
downloaded it. The new feature I really approve of is the ability to
resume a download that may have been abruptly stopped because Firefox,
or your system, crashed. (2) Enhanced address bar: In Firefox 3 Beta 2,
the autocomplete doesn't just offer a list of URLs that you've been to,
but includes sites that are in your bookmark list; it then gives you a
nice, clear listing of the URLs and site names in large, easy-to-read
text, with the typed-in phrase underlined. (3) A workable bookmark
organizer: The new Places Organizer vastly improves Firefox's management
of bookmark lists. (4) Easier bookmarking: You can now quickly create
a bookmark by double-clicking on a star that appears in the right side
of the address bar; you can also add tags to your bookmarks, which
could work nicely as an organizational tool. (5) Better memory management:
The new version of Firefox appears to have a smaller memory footprint
than its predecessor. ["Beta 2 includes over 30 more memory leak fixes,
and 11 improvements to the memory footprint."]
Friday, December 21, 2007
Implementing Healthcare Messaging with XML
At XML 2007, Marc de Graauw provided an overview of the national EHR
being set up in the Netherlands. It uses XML, HL7v3 and Web Services.
He takes a look at lessons learned and the pitfalls to be avoided:
(1) Schemas serve multiple masters -- design, validation, contract,
code generation. And those purposes don't play together well. Write
flat, simple Schemas. Those are understandable and generate
understandable code. Don't design Schemas for reuse. Use a simple
spreadsheet format instead as your baseline. And tweak your Schemas
with XSLT before generating code. After all, they're just XML. (2)
Use a layered approach. Anything beyond Celsius-to-Fahrenheit will not
be a monolithic Web Service. So anonymize payloads with 'xs:any' to
generate stubs and make Schemas which describe just one software layer.
This ensures reuse, and stacks nicely on top of the Internet stack...
(3) Make examples everywhere: hand-write XML messages, and use those
to develop and test services. XML based message exchanges are hard,
and documentation for them gets large. Example XML messages are required
to keep everyone sane. And make your messages wrong -- see how
applications handle all kinds of common mistakes. (4) Do a lot of
HTTP work: specify HTTP status codes, when to use which codes in
combination with higher level (SOAP, HL7v3) error codes. (5) Profile
the profiles! Don't simply use WS-I Basic Profile and Security Profile,
but write your own lean profiles -- skin them till only what's really
needed is left. Plenty of options means plenty of interoperability
problems... profiling possibilities on top of WS-ReliableMessaging
and WS-Security.
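Point (2) above, anonymizing payloads with xs:any so that each schema
describes only one software layer, can be pictured with a sketch like the
following. The element names are hypothetical rather than taken from the
Dutch EHR or HL7v3 schemas; the point is simply that the transport-layer
schema treats the clinical payload as opaque and leaves its validation to
a separate layer.
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                elementFormDefault="qualified">
      <xsd:element name="TransmissionWrapper">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element name="MessageId" type="xsd:string"/>
            <!-- payload left opaque: code generators emit a simple stub -->
            <xsd:any namespace="##other" processContents="lax"/>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>
    </xsd:schema>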
AirTran Becomes First U.S. Carrier to Use Sabre XML Interface
AirTran Airways last week began displaying seat maps through Sabre's
Extensible Markup Language (XML) interface, and plans to add additional
booking options through the global distribution system early next year.
Sabre vice president of product marketing Kyle Moore said the XML
interface allows the GDS to display travel content not generally
enabled through traditional legacy systems. Through XML, Sabre can
tap into airlines' Web-based reservations systems and display and sell
air content in a manner closer to how airlines sell and distribute
through their Web sites, Moore said. Though Sabre has been using XML
for years now to link with other travel suppliers, including Expedia
and hotel companies, Moore said AirTran is the first major airline to
adopt the link: "XML is far more flexible than technologies that we and
travel suppliers have used in the past. It allows us to do things that
we previously were not able to do. Carriers can use an XML connection
to sell ancillary services, unbundle fare options and (like AirTran)
show seat maps and more detailed flight information through the global
distribution system. As carriers introduce new things, they're generally
not building them in legacy technologies. This is a platform that can
support traditional types of transactions using new technology or
nontraditional types of transactions in environments in which they
may want them to work." AirTran early next year will launch additional
booking features through Sabre's XML link, according to Moore, who
said that other airlines also are in discussions to hook up through XML.
Manage an HTTP Server Using RESTful Interfaces and Project Zero
WS-* users and REST users have an ongoing debate over which technique
is most appropriate for which problem sets, with WS-* users often
claiming that more complex, enterprise-level problems cannot be solved
RESTfully. This article puts that theory to the test by trying to
create a RESTful solution for a problem area that is not often
discussed by REST users: systems management. The article shows how to
make a Zero-based RESTful interface for httpd that is as functionally
complete as an Apache Muse-based WS-* version. The
combination of Groovy scripts and RESTdoc comments provides the same
features and behavior as we had with Java classes and WSDL and
demonstrates that REST can handle the tasks that are thought to be
"too complicated" for HTTP alone. The REST and WS-* solutions each
have their pros and cons, and which one you favor may change from
project to project. The article is not about enumerating the pros and
cons of WS-* technology versus REST-oriented technology, and it is
not out to select a "winner." The goal of the article is to demonstrate
whether or not REST and Web 2.0 development techniques provide a
productive alternative for systems management projects and hopefully
give developers some additional choices.
Firefox 3 Beta 2 Arrives Early
In Mozilla's Firefox 3 Beta 2 release, Mozilla developers have improved
security and performance as well as functionality. In total, Mozilla
boasts in its release notes that some 900 improvements were made in
Beta 2 over the Beta 1 release, which came out about a month ago. Many
improvements are focused on how Firefox handles memory. Firefox developer
Mike Beltzner claimed in a mailing list posting that over 330 memory
leaks had been fixed. Memory handling and leakage issues have been a
high-priority item
for Mozilla developers throughout the Firefox 3 process. Firefox 3 Beta
2 also fixes leaks in how the browser handles JSON (JavaScript Object
Notation) cross site requests, making the browser more secure. JSON is
often used in Ajax web development as an alternative to XML in
XMLHttpRequest (XHR) exchanges. Security is further enhanced with anti-virus
integration in Firefox's download manager. Beta 2 also improves on the
security of plugins by implementing a version check to identify plugins
that are not secure. Mozilla has also taken steps to further improve
its Places bookmarking and history system which is a major new feature
of the Firefox 3 browser. The Places system was originally intended to
be part of the Firefox 2 release but wasn't ready in time. It has been
part of the Firefox 3 development cycle since at least the Alpha 5
release in June. Fundamentally, Places makes it easy to create, manage
and use bookmarks and history information.
ASCII Escaping of Unicode Characters
The Internet Engineering Steering Group has announced the publication
of "ASCII Escaping of Unicode Characters" as an IETF Best Current
Practice (BCP) specification. Abstract: "There are a number of
circumstances in which an escape mechanism is needed in conjunction
with a protocol to encode characters that cannot be represented or
transmitted directly. With ASCII coding the traditional escape has been
either the decimal or hexadecimal numeric value of the character,
written in a variety of different ways. The move to Unicode, where
characters occupy two or more octets and may be coded in several
different forms, has further complicated the question of escapes. This
document discusses some options now in use and discusses considerations
for selecting one for use in new IETF protocols and protocols that are
now being internationalized." In accordance with existing best-practices
recommendations (RFC 2277), new protocols that are required to carry
textual content for human use SHOULD be designed in such a way that
the full repertoire of Unicode characters may be represented in that
text. This document therefore proposes that existing protocols being
internationalized, and that need an escape mechanism, SHOULD use some
contextually-appropriate variation on references to code points unless
other considerations outweigh those described here. This recommendation
is not applicable to protocols that already accept native UTF-8 or some
other encoding of Unicode. In general, when protocols are
internationalized, it is preferable to accept those forms rather than
using escapes. This recommendation applies to cases, including transition
arrangements, in which that is not practical. This BCP document has been
reviewed in the IETF but is not the product of an IETF Working Group;
the IESG contact person is Chris Newman. The subject of escaping has
been extensively reviewed and debated on relevant IETF mailing lists
and by active participants of the Unicode community. The discussions
were not able to achieve consensus to recommend one specific format,
but rather to recommend two good formats and discourage use of some
problematic formats. There was some debate over how much discussion of
problematic formats was appropriate.
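As a rough illustration of the code-point style of escape the document
weighs, the short Python sketch below replaces each non-ASCII character in a
string with a U+XXXX reference; the function name and the exact output
format are this note's own choices, not prescriptions from the BCP.

def escape_non_ascii(text: str) -> str:
    """Replace non-ASCII characters with U+XXXX code point references."""
    out = []
    for ch in text:
        if ord(ch) < 0x80:
            out.append(ch)                          # ASCII passes through
        else:
            out.append("U+{:04X}".format(ord(ch)))  # code point reference
    return "".join(out)

print(escape_non_ascii("café \u2192 naïve"))   # cafU+00E9 U+2192 naU+00EFve

A real protocol would typically also define a delimiter or escape introducer
so that literal text resembling an escape cannot be misread as one.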
of "ASCII Escaping of Unicode Characters" as an IETF Best Current
Practice (BCP) specification. Abstract: "There are a number of
circumstances in which an escape mechanism is needed in conjunction
with a protocol to encode characters that cannot be represented or
transmitted directly. With ASCII coding the traditional escape has been
either the decimal or hexadecimal numeric value of the character,
written in a variety of different ways. The move to Unicode, where
characters occupy two or more octets and may be coded in several
different forms, has further complicated the question of escapes. This
document discusses some options now in use and discusses considerations
for selecting one for use in new IETF protocols and protocols that are
now being internationalized." In accordance with existing best-practices
recommendations (RFC 2277), new protocols that are required to carry
textual content for human use SHOULD be designed in such a way that
the full repertoire of Unicode characters may be represented in that
text. This document therefore proposes that existing protocols being
internationalized, and that need an escape mechanism, SHOULD use some
contextually-appropriate variation on references to code points unless
other considerations outweigh those described here. This recommendation
is not applicable to protocols that already accept native UTF-8 or some
other encoding of Unicode. In general, when protocols are
internationalized, it is preferable to accept those forms rather than
using escapes. This recommendation applies to cases, including transition
arrangements, in which that is not practical. This BCP document has been
reviewed in the IETF but is not the product of an IETF Working Group;
the IESG contact person is Chris Newman. The subject of escaping has
been extensively reviewed and debated on relevant IETF mailing lists
and by active participants of the Unicode community. The discussions
were not able to achieve consensus to recommend one specific format,
but rather to recommend two good formats and discourage use of some
problematic formats. There was some debate over how much discussion of
problematic formats was appropriate.
Mathematical Markup Language (MathML) Version 3.0
Members of the W3C Math Working Group have released a third Public
Working Draft which specifies a new version of the Mathematical
Markup Language: MathML 3.0. MathML is an XML application for
describing mathematical notation and capturing both its structure
and content. The goal of MathML is to enable mathematics to be served,
received, and processed on the World Wide Web, just as HTML has enabled
this functionality for text. MathML can be used to encode both
mathematical notation and mathematical content. About thirty-five of the
MathML tags describe abstract notational structures, while roughly another
one hundred and seventy provide a way of unambiguously specifying the
intended meaning of an expression. Additional chapters discuss how
the MathML content and presentation elements interact, and how MathML
renderers might be implemented and should interact with browsers.
Finally, this document addresses the issue of special characters used
for mathematics, their handling in MathML, their presence in Unicode,
and their relation to fonts. While MathML is human-readable, in all
but the simplest cases, authors use equation editors, conversion
programs, and other specialized software tools to generate MathML.
Several versions of such MathML tools exist, and more, both freely
available software and commercial products, are under development.
Note: The W3C WG has also published "A MathML for CSS Profile"; this
MathML 3.0 profile permits formatting with Cascading Style Sheets. This
should facilitate adoption of MathML in web browsers and CSS formatters,
allowing them to reuse the existing CSS visual formatting model, enhanced
with a few mathematics-oriented extensions, for rendering the layout
schemata of presentational MathML.
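To make the presentation side of that split concrete, here is a small Python
sketch that emits a presentation-MathML fragment for x² + y; the element
names (math, mrow, msup, mi, mn, mo) are standard presentation elements, but
the snippet itself is illustrative and not taken from the Working Draft.

import xml.etree.ElementTree as ET

MATHML_NS = "http://www.w3.org/1998/Math/MathML"
ET.register_namespace("", MATHML_NS)

def mk(tag, parent=None, text=None):
    """Create a MathML element in the MathML namespace."""
    name = "{%s}%s" % (MATHML_NS, tag)
    elem = ET.Element(name) if parent is None else ET.SubElement(parent, name)
    elem.text = text
    return elem

# Presentation markup for the expression x^2 + y
math = mk("math")
row = mk("mrow", math)
sup = mk("msup", row)
mk("mi", sup, "x")
mk("mn", sup, "2")
mk("mo", row, "+")
mk("mi", row, "y")

print(ET.tostring(math, encoding="unicode"))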
XML Entity Definitions for Characters
W3C announced the release of a First Public Working Draft for the
specification "XML Entity Definitions for Characters." The document has
been produced by members of the W3C Math Working Group as part of the
W3C Math Activity; it is one of three drafts relevant to MathML published
on 2007-12-14. The document defines several sets of names which are
assigned to Unicode characters; these names may be used for entity
references in SGML/XML-based markup languages. Notation and symbols
have proved very important for scientific documents, especially in
mathematics. In the majority of cases it is preferable to store
characters directly as Unicode character data or as XML numeric character
references. However, in some environments it is more convenient to use
the ASCII input mechanism provided by XML entity references. Many entity
names are in common use, and this specification aims to provide standard
mappings to Unicode for each of these names. In the Working Draft, two
tables listing the combined sets are presented, first in Unicode order
and then in alphabetic order; then tables documenting each of the entity
sets are provided. Each set has a link to the DTD entity declaration
for the corresponding entity set, and also a link to an XSLT2 stylesheet
that will implement a reverse mapping from characters to entity names.
In addition to the stylesheets and entity files corresponding to each
individual entity set, a combined stylesheet is provided, as well as
two combined sets of DTD entity declarations. The first is a small file
which includes all the other entity files via parameter entity references;
the second is a larger file that directly contains a definition of each
entity, with all duplicates removed.
Example sets include: [1] C0 Controls and Basic Latin, C1 Controls and
Latin-1 Supplement; [2] Latin Extended-A, Latin Extended-B; [3] IPA
Extensions, Spacing Modifier Letters; [4] Combining Diacritical Marks,
Greek and Coptic; [5] Cyrillic; [6] General Punctuation, Superscripts
and Subscripts, Currency Symbols, Combining Diacritical Marks for
Symbols; [7] Letterlike Symbols, Number Forms, Arrows... The editor notes:
It is hoped that the entity sets defined by this specification may form
the basis of an update to "ISO 9573-13-1991". However, pressure of other
commitments has currently prevented this document being processed by
the relevant ISO committee, thus the entity sets are being presented with
Formal Public identifiers of the form "-//W3C//..." rather than "ISO...."
It is hoped that an update to TR 9573-13 may be made later. The present
version of TR 9573-13 defines the sets of names, but does not give
mappings to Unicode. TR 9573-13 is maintained by ISO/IEC JTC 1/SC 34/WG 1
(Markup Languages). An Outgoing Liaison Statement from SC34 was recently
communicated to the W3C MathML WG regarding cancellation of the project
for TR 9573-13, Second Edition [Revision of TR 9573-13, SGML support
facilities -- Techniques for using SGML - Part 13: Public entity sets for
SGML for mathematics and science], in accordance with Resolution 13
adopted at the SC 34 plenary meeting held in Kyoto, Japan, 2007-12-08/11.
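As a quick illustration of the kind of name-to-code-point mapping being
standardized, Python's html.entities module ships a table derived from the
widely used HTML entity names, which overlap heavily with these sets; the
snippet below resolves a few familiar names and is a convenience sketch, not
a use of the W3C entity files themselves.

import html
from html.entities import name2codepoint

# Resolve individual entity names to Unicode code points and characters.
print(name2codepoint["alpha"], chr(name2codepoint["alpha"]))   # 945 α
print(name2codepoint["rarr"], chr(name2codepoint["rarr"]))     # 8594 →

# html.unescape applies the whole name table to a string of references.
print(html.unescape("&alpha;&beta;&rarr;&infin;"))             # αβ→∞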
specification "XML Entity Definitions for Characters." The document has
been produced by members of the W3C Math Working Group as part of the
W3C Math Activity; it is one of three drafts relevant to MathML published
on 2007-12-14. The document defines several sets of names which are
assigned to Unicode characters; these names may be used for entity
references in SGML/XML-based markup languages. Notation and symbols
have proved very important for scientific documents, especially in
mathematics. In the majority of cases it is preferable to store
characters directly as Unicode character data or as XML numeric character
references. However, in some environments it is more convenient to use
the ASCII input mechanism provided by XML entity references. Many entity
names are in common use, and this specification aims to provide standard
mappings to Unicode for each of these names. In the Working Draft, two
tables listing the combined sets are presented, first in Unicode order
and then in alphabetic order; then tables documenting each of the entity
sets are provided. Each set has a link to the DTD entity declaration
for the corresponding entity set, and also a link to an XSLT2 stylesheet
that will implement a reverse mapping from characters to entity names.
In addition to the stylesheets and entity files corresponding to each
individual entity set, a combined stylesheet is provided, as well as
two combined sets of DTD entity declarations. The first is a small file
which includes all the other entity files via parameter entity references;
the second is a larger file that directly contains a definition of each
entity, with all duplicates removed.
Example (sets) include: [1] C0 Controls and Basic Latin, C1 Controls and
Latin-1 Supplement; [2] Latin Extended-A, Latin Extended-B; [3] IPA
Extensions, Spacing Modifier Letters; [4] Combining Diacritical Marks,
Greek and Coptic; [5] Cyrillic; [6] General Punctuation, Superscripts
and Subscripts, Currency Symbols, Combining Diacritical Marks for
Symbols; [7] Letterlike Symbols, Number Forms, Arrows... The editor notes:
It is hoped that the entity sets defined by this specification may form
the basis of an update to "ISO 9573-13-1991". However, pressure of other
commitments has currently prevented this document being processed by
the relevant ISO committee, thus the entity sets are being presented with
Formal Public identifiers of the form "-//W3C//..." rather than "ISO...."
It is hoped that an update to TR 9573-13 may be made later. The present
version of TR 9573-13 defines the sets of names, but does not give
mappings to Unicode. TR 9573-13 is maintained by ISO/IEC JTC 1/SC 34/WG 1
(Markup Languages). An Outgoing Liaison Statement from SC34 was recently
communicated to the W3C MathML WG regarding cancellation of the project
for TR 9573-13, Second Edition [Revision of TR 9573-13, SGML support
facilities -- Techniques for using SGML - Part 13: Public entity sets for
SGML for mathematics and science], in accordance with Resolution 13
adopted at the SC 34 plenary meeting held in Kyoto, Japan, 2007-12-08/11.
More Information See also the source files: Click Here
XForms and Ruby on Rails at the Doctor's Office, Part 1
This is the first article in a four-part series about using XForms,
IBM DB2 pureXML, and Ruby together to more easily create Web applications.
We examine how XForms, DB2 pureXML, and Ruby on Rails can help you more
rapidly build XML-centric Web applications, and how XForms simplifies
creating an interactive front end. You will get the
interactivity of Ajax, but without having to write any JavaScript or
mapping code. We look at how easy it is to store and query XML using
DB2 pureXML: DB2's SQL/XML will let you mix SQL and XQuery together
to easily access XML data in your database. Finally, we look at how
to set up Ruby on Rails to work with DB2 pureXML. With just a few minor
adjustments, we were able to create XML-enabled tables in DB2 using
Ruby on Rails. XForms allows you to define your data in a simple XML
model and your view using standard HTML form elements. XForms then
provides declarative mapping between these elements. That means you
will not have to write either client-side or server-side code for
taking a submitted value and inserting it into an XML structure. XForms
handles it for you. It even does all of this asynchronously: changes
in the HTML form are bound to the XML model and sent to the server
for synchronization. You get the benefits of Ajax without having to
write any JavaScript.
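The article's stack is Ruby on Rails, but the flavour of an SQL/XML query
against a pureXML column can be sketched from any DB-API-style client. The
Python fragment below is a hypothetical sketch: the ibm_db_dbi connection
string, the patients table, its XML column doc, and the XPath expressions
are all invented for illustration.

import ibm_db_dbi

# Hypothetical connection details; adjust for a real DB2 instance.
conn = ibm_db_dbi.connect(
    "DATABASE=clinic;HOSTNAME=localhost;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret;", "", "")
cur = conn.cursor()

# SQL/XML: XMLQUERY embeds XQuery inside SQL, XMLEXISTS filters on it.
cur.execute("""
    SELECT XMLQUERY('$d/patient/name/text()' PASSING doc AS "d")
    FROM patients
    WHERE XMLEXISTS('$d/patient[status="active"]' PASSING doc AS "d")
""")
for (name,) in cur.fetchall():
    print(name)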
Orbeon Forms 3.6 Final Release
Developers have announced the final release of Orbeon Forms 3.6. Orbeon
Forms is an open source forms solution that handles the complexity of
forms typical of the enterprise or government. It is delivered to
standard web browsers (including Internet Explorer, Firefox, Safari and
Opera) thanks to XForms and Ajax technology, with no need for client-side
software or plugins. Orbeon Forms allows you to build fully interactive
forms with features that include as-you-type validation, optional and
repeated sections, always up-to-date error summaries, PDF output, full
internationalization, and controls like auto-completion, tabs, dialogs,
trees and menus. Orbeon Forms 3.6 features over 170 improvements since
Orbeon Forms 3.5.1, including major improvements in the areas of state
handling, XML Schema validation, error handling, deployment within Java
applications, and performance. In previous versions, XML Schema
validation always followed a strict mode where all instances had to
be strictly valid as per imported schema definitions. In particular,
this meant that if you imported a schema, the top-level element of an
instance had to have a valid schema definition or the instance would
be entirely invalid. In version 3.6, Orbeon Forms implements a "lax"
validation mode by default, where only elements that have definitions
in the imported schemas are validated. Other elements are not considered
for validation. This is in line with XML Schema and XSLT 2.0 lax
validation modes. Founded in 1999, Orbeon is headquartered in Silicon
Valley and maintains a field office in Switzerland.
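Lax validation is easy to picture with a toy model: only elements the schema
actually declares get checked, and everything else is ignored, whereas
strict mode treats undeclared elements as errors. The Python sketch below is
purely illustrative (a real engine delegates to an XML Schema processor),
and its element names and checks are made up.

# Made-up 'schema': element names mapped to simple content checks.
declared = {
    "age":   lambda text: text.isdigit(),
    "email": lambda text: "@" in text,
}

def validate(elements, mode="lax"):
    """elements is a list of (name, text) pairs; returns error messages."""
    errors = []
    for name, text in elements:
        check = declared.get(name)
        if check is None:
            if mode == "strict":
                errors.append("<%s>: no declaration in schema" % name)
            continue                    # lax: undeclared elements are skipped
        if not check(text):
            errors.append("<%s>: invalid content %r" % (name, text))
    return errors

instance = [("age", "42"), ("nickname", "zaphod"), ("email", "not-an-address")]
print(validate(instance, mode="strict"))  # flags <nickname> and <email>
print(validate(instance, mode="lax"))     # flags only <email>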
XForms: Who Needs Killer Apps?
The XML 2007 Conference has come and gone with, as usual, a number of
thought-provoking talks and controversies. During the evening of the
first day, there was a special XForms Evening, with a number of the
industry gurus in that space providing very good examples of why XForms
is a compelling technology and here to stay... When you stop and think
about it, you might begin to realize how very unusual XForms is in that
regard. It's an application layer that transcends the implementation
it is written in. It doesn't matter whether I'm writing an XForms
component in C++ or Java or XUL or JavaScript -- what is important is
that I can run the same 'applications' on any system, that the
ecosystem is fitting XForms in where it can, despite the very best
efforts of certain vendors to kill it... Pundits will continue declaring
its imminent demise, year after year, and yet, year after year, it'll
end up on more servers, more desktops, more browsers and mobile devices.
Thus, my anticipation is that the number of XForms specialists will
remain comparatively small for some time to come, but they will be
educating others, who will quietly be incorporating XForms as a way of
life into their applications. Some (many) of those will come from the
AJAX community, both as AJAX implementations of XForms continue to
proliferate and as many who work at the intersection of AJAX and XML
understand that while they CAN continue to rebuild the wheel with every
app, they can get a lot farther with XForms as part of their toolkit...
I think that you need to make a distinction here between 'the industry'
and a few companies such as Microsoft or Adobe. There are actually a
number of vendors in this space that are doing quite well thank you,
especially as interest in large XML vocabularies such as XBRL, HL7 and
other vertical efforts continues to rise. IBM's Workplace Forms
incorporates XForms, as Sun has done with OpenOffice; Firefox has had
ongoing XForms support for nearly two years; and products such as Orbeon,
Formsplayer and Picoforms have continued to gain adherents. XForms
support in desktop browsers is moving slowly, unfortunately, a space
where more innovation needs to happen, but at the same time support
DOES exist in one form or another, even if such support is not always
native. On the flip-side, part of the change is also coming from the
XForms working group, as they realize that while it is POSSIBLE to
create a stand-alone application layer in XML, it's not necessarily
desirable to keep everything constrained to that one layer.
SugarCRM Offers Biggest Upgrade Yet
SugarCRM has released the 5.0 version of its open-source customer
relationship management software following a long period of development
and testing. Sugar 5.0 features improvements in three main areas:
A new on-demand architecture designed to improve security, tools that
let nontechnical users build custom modules, and an AJAX e-mail client
that is compatible with any server that supports the POP3 protocol. The
release also delivers upgraded dashboarding capabilities. The software
went through three beta cycles and was tested more than 30,000 times
by members of SugarCRM's open-source community, said Chris Harrick,
senior director of product marketing for the Cupertino, California,
company. Harrick said the open-source development model allows software
to be vetted far more thoroughly than an in-house quality testing team
can. In a space crowded by seemingly similar CRM offerings, SugarCRM has
tried to differentiate itself partly by fostering a user-friendly image,
according to China Martens, an analyst with the 451 Group. The company's
attitude, Martens says, is: "Forget about the technical guys, we're Sugar
and you can configure us. We're friendly."
FIQL: The Feed Item Query Language
An initial public draft of "FIQL: The Feed Item Query Language" has
been released. The Feed Item Query Language (FIQL, pronounced "fickle")
is a simple but flexible, URI-friendly syntax for expressing filters
across the entries in a syndicated feed. For example, a query
"title==foo*;(updated=lt=-P1D,title==*bar)" would return all entries
in a feed that meet the following criteria: (1) have a title beginning
with "foo", AND (2) have been updated in the last day OR have a title
ending with "bar". The specification defines an extensible syntax for
FIQL queries, explains their use in HTTP, and defines feed extensions
for discovering and describing query interfaces. On the Atom list,
the author responded to a question "Why not XPath or XQuery or SPARQL
(with an Atom/RDF mapping), or CSS selectors or some subset of one of
those?" In a nutshell, there are two reasons; [i] Those query languages
are optimised for data models that aren't feeds; respectively, XML
Infosets, Infosets again, RDF graphs and CSS cascades. While it's
possible to contort them to fit feeds, they don't really lend themselves
to it. XQuery and SPARQL also present a fairly high barrier to adoption
(if you're not a big XML vendor or a SW-head, respectively ;) Contorting
them so that they're easy to fit into a URL isn't too attractive,
either. [ii] When you expose a query interface, you're allowing people
to consume compute power on your servers. An arbitrary query language
allows arbitrary queries, which is unacceptable when you're working
across administrative domains. FIQL gives you tools to constrain how
queries are shaped. I've been asked this many times, and should probably
add it as a FAQ in an appendix. Certainly there are use cases for using
XQuery, etc. against feeds, but it's also become apparent that there's
a place for something simple, reasonably flexible, and Web-friendly.
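To ground the example, here is a small Python sketch that hand-translates
the same filter — title begins with "foo", AND (updated within the last day
OR title ends with "bar") — and applies it to a few invented entries; it
hard-codes the logic rather than parsing FIQL syntax.

from datetime import datetime, timedelta, timezone

def matches(entry, now=None):
    """Hand-coded version of title==foo*;(updated=lt=-P1D,title==*bar),
    reading ';' as AND, ',' as OR, and -P1D as 'within the last day'."""
    now = now or datetime.now(timezone.utc)
    one_day_ago = now - timedelta(days=1)
    return (entry["title"].startswith("foo")
            and (entry["updated"] > one_day_ago
                 or entry["title"].endswith("bar")))

entries = [
    {"title": "foo news",  "updated": datetime.now(timezone.utc)},
    {"title": "foobar",    "updated": datetime(2007, 1, 1, tzinfo=timezone.utc)},
    {"title": "unrelated", "updated": datetime.now(timezone.utc)},
]
print([e["title"] for e in entries if matches(e)])   # ['foo news', 'foobar']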
Liberty Alliance Publishes SAML 2.0 Interoperability Testing Matrix
Liberty Alliance announced that products from Hewlett-Packard, IBM,
RSA (The Security Division of EMC), Sun Microsystems, and Symlabs, Inc.
have passed Liberty Alliance testing for SAML 2.0 interoperability.
The Security Assertion Markup Language (SAML) Specification Version
2.0 was approved as an OASIS Standard in March 2005. Products and
services passing SAML 2.0 interoperability testing included:
Hewlett-Packard's HP Select Federation 7.0; IBM's Tivoli Federated
Identity Manager, version 6.2; RSA's Federated Identity Manager 4.0;
Sun Microsystems' Java System Federated Access Manager 8.0; Symlabs
Inc's Federated Identity Suite version 3.3.0. The vendors participated
in the November 2007 Liberty Interoperable event administered by the
Drummond Group Inc. and are the first to pass full-matrix testing
Liberty Alliance incorporated into its interoperability program this
year. All of these vendors also passed Liberty Alliance testing against
the US GSA SAML 2.0 profile, meeting the prerequisite interoperability
requirements for participating in the US E-Authentication Identity
Federation. Liberty Alliance continually enhances the Liberty
Interoperable program to meet cross-industry demands for proven
interoperable identity solutions. The November event was the first to
conduct Internet-based and full-matrix testing. Internet-based testing
allows vendors to participate in the same interoperability event from
anywhere in the world. Full-matrix testing requires each vendor to test
with every other participant, ensuring the testing mirrors real-world
identity federation interoperability requirements. The breadth and depth
of these testing procedures provide deploying organizations with
assurance that products have been proven to interoperate with each other
across the widest possible range of deployment scenarios.
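As a back-of-the-envelope illustration of why full-matrix testing is
demanding, the Python snippet below counts the pairings for the five
participants named above; whether each pairing is exercised once or in both
role directions (a detail not stated here) determines which number applies.

from itertools import combinations, permutations

participants = ["HP", "IBM", "RSA", "Sun Microsystems", "Symlabs"]

# Unordered pairings: every product tested against every other product.
print(len(list(combinations(participants, 2))))   # 10

# Ordered pairings, e.g. if each product must be exercised in both the
# identity-provider and service-provider role against every peer.
print(len(list(permutations(participants, 2))))   # 20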
Digital Libraries Are Taking Form
Large-scale digital libraries and book digitization projects are poised
to go beyond prototypes into the mass market. "All the published
literature of humankind in the next generation will be in digital form,"
says Brewster Kahle, cofounder of the Internet Archive and one of the
driving forces behind the nonprofit Open Content Alliance (OCA), an open
digitization consortium. "And all the older materials that will be used
by younger people (except for a very few) will be online. So, if we want
something to be used by the next generation, it has to be online. That's
an understood premise. It's now also understood that it's not that
expensive to get there." Librarians tackling the new digitization
projects contend with complex technological issues. Notable among them
is creating metadata schemas that work across multiple technologies and
organizations. How best to provide multilingual services is another issue.
However, the issue of who will control the digitization process, and its
concomitant economic and access ramifications, is far more convoluted...
Interoperability poses several difficulties. Digitization is available
in several common formats for text-heavy books. Developing metadata for
such books is therefore easier than it is for multimedia materials spread
across multiple institutions. Metadata compatibility will likely present
the greatest challenges and the greatest opportunity for developers in
this market. The European Digital Library (EDL) will most likely opt for
a metadata scheme based on the Dublin Core standard. Presumably, as the
EDL work progresses, mapping technologies will evolve to support semantic
queries. This, in turn, will enable application-level interoperation
without the need for separate, complex, and expensive application-level
interoperability profiles.
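For readers unfamiliar with Dublin Core, the scheme boils down to a small
set of descriptive elements in a well-known namespace. The Python sketch
below assembles a minimal record; the field values are invented, and a real
EDL profile would refine or extend this.

import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

# A minimal Dublin Core description of a digitized book (values made up).
record = ET.Element("record")
for name, value in [("title",      "An Example Digitized Book"),
                    ("creator",    "Doe, Jane"),
                    ("date",       "1890"),
                    ("language",   "en"),
                    ("identifier", "urn:example:book:0001")]:
    field = ET.SubElement(record, "{%s}%s" % (DC, name))
    field.text = value

print(ET.tostring(record, encoding="unicode"))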
IBM Partners With ACI on SOA-Based Payments System
IBM announced that it has partnered with ACI Worldwide in building
electronic payment systems that are based on a service-oriented
architecture to make it easier to share payment information across
banking applications. The alliance is focused primarily on the financial
services industry, targeting banks that are trying to manage old
payments systems running on legacy platforms that are difficult to
integrate with newer systems and are expensive to maintain, IBM said.
ACI and IBM plan to offer an SOA approach for integration. SOA uses
technology based on Extensible Markup Language, or XML, to loosely
couple systems for passing data between them. Phase one of the partnership
is expected to yield an optimized version of BASE24-eps on System z to
acquire, route, and authorize payments online; a wholesale payments
system to help European companies meet pending Single Euro Payments Area
regulations; and a real-time fraud detection system. Subsequent systems
will focus on dispute management, smart card management, online banking,
and trade finance. Under the deal, ACI will tailor its money transfer
system and BASE24-eps application to run on IBM's System z mainframe
hardware. The companies plan to form joint sales and technical teams
for selling the combined technologies, and for helping companies migrate
legacy systems to the new products.
Ruby on Rails 2.0 Users Give Thumbs Up
With Ruby on Rails 2.0 just a week old, developers already are weighing
in with what they like or dislike about the new release. Ruby on Rails
creator David Heinemeier Hansson announced the release of Ruby on Rails
2.0 on December 7, 2007 to a developer base set on seeing the next big
thing regarding the popular Web development framework. Chief among the
changes in Rails 2.0 are enhanced security and support for REST
(Representational State Transfer). Steven Beales, chief software
architect at Medical Decision Logic said Mdlogix has been using the
EdgeRails releases of Rails and had already incorporated many of the
Rails 2.0 features into its Rails-based solutions. Mdlogix develops a
clinical research management system based on Rails. Beales said the most
useful features of Rails 2.0 for Mdlogix have been Partial Layouts, which
reduce CSS (Cascading Style Sheets)/HTML duplication by allowing parts of
pages to use common layouts; RESTful Routing Updates, which allow "prettier"
URLs for custom actions; Asset Caching, which provides new tags for
compressing JavaScript easily; Initializers, which separate custom
configuration into individual initializer files; and Fixtures, which
support using fixture names in other fixture files to relate fixtures.
Simply put, Beales said RoR (Ruby on Rails) is the
most productive tool Mdlogix has for developing simple-looking Web
applications with advanced functionality.
Video Requirements for Web-based Virtual Environments Using Extensible 3D (X3D)
This presentation from members of the Web3D Consortium was given at the
"W3C Video on the Web Workshop", held 12-13 December 2007, in San Jose,
California, USA and Brussels, Belgium. Real-time interactive 3D graphics
and virtual environments typically include a variety of multimedia
capabilities, including video. Extensible 3D (X3D) Graphics is an
ISO standard produced by the Web3D Consortium that defines 3D scenes
using a scene-graph approach. Multiple X3D file formats and language
encodings are available, with a primary emphasis on XML for maximum
interoperability with the Web architecture. A large number of functional
capabilities are needed and projected for the use of video together
with Web-based virtual environments. This paper examines numerous
functional requirements for the integrated use of Web-compatible video
with 3D. Three areas of interest are identified: video usage within X3D
scenes, linking video external to X3D scenes, and generation of 3D
geometry from video. Extensible 3D (X3D) is a Web-based standard for
3D graphics, enabling real-time communication using animation, user
interaction and networking. The point paper lists current and expected
requirements, primarily divisible into usage of video within X3D graphics
scenes, linkage to video in web-based applications external to X3D
graphics scenes, and generation of 3D geometric content from spatially
annotated video inputs. Royalty-free video capabilities are critically
important for achieving essential requirements for interoperability and
performance. Standards-based X3D requirements also appear to be
representative of the needs presented by alternative proprietary
multiuser virtual environments. X3D capabilities are proposed,
implemented, evaluated and approved by members of the nonprofit Web3D
Consortium. X3D is an open, royalty-free standard that is rigorously
defined, published online, and ratified by the International Organization
for Standardization (ISO). Multiple commercial and open-source implementations
are available.
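One of the "video inside an X3D scene" cases can be pictured as a
MovieTexture node applied to ordinary geometry. The Python fragment below
emits a minimal scene of that shape; the attribute values and the clip URL
are illustrative and not taken from the Web3D paper.

import xml.etree.ElementTree as ET

# Minimal, illustrative X3D scene: a video texture mapped onto a box.
x3d = ET.Element("X3D", profile="Interchange", version="3.2")
scene = ET.SubElement(x3d, "Scene")
shape = ET.SubElement(scene, "Shape")
appearance = ET.SubElement(shape, "Appearance")
ET.SubElement(appearance, "MovieTexture", url='"clip.mp4"', loop="true")
ET.SubElement(shape, "Box", size="4 3 0.1")

print(ET.tostring(x3d, encoding="unicode"))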
"W3C Video on the Web Workshop", held 12-13 December 2007, in San Jose,
California, USA and Brussels, Belgium. Real-time interactive 3D graphics
and virtual environments typically include a variety of multimedia
capabilities, including video. The Extensible 3D (X3D) Graphics is an
ISO standard produced by the Web3D Consortium that defines 3D scenes
using a scene-graph approach. Multiple X3D file formats and language
encodings are available, with a primary emphasis on XML for maximum
interoperability with the Web architecture. A large number of functional
capabilities are needed and projected for the use of video together
with Web-based virtual environments. This paper examines numerous
functional requirements for the integrated use of Web-compatible video
with 3D. Three areas of interest are identified: video usage within X3D
scenes, linking video external to X3D scenes, and generation of 3D
geometry from video. Extensible 3D (X3D) is a Web-based standard for
3D graphics, enabling real-time communication using animation, user
interaction and networking. The point paper lists current and expected
requirements, primarily divisible into usage of video within X3D graphics
scenes, linkage to video in web-based applications external to X3D
graphics scenes, and generation of 3D geometric content from spatially
annotated video inputs. Royalty-free video capabilities are critical
important to achieve essential requirements for interoperability and
performance. Standards-based X3D requirements also appear to be
representative of the needs presented by alternative proprietary
multiuser virtual environments. X3D capabilities are proposed,
implemented, evaluated and approved by members of the nonprofit Web3D
Consortium. X3D is an open, royalty-free standard that is rigorously
defined, published online, and ratified by the International Organization
for Standards (ISO). Multiple commercial and open-source implementations
are available.
The Open-ness of the Open Source Vulnerability Database
There are a lot of open source initiatives out there that aren't just
software, but ways to get information into people's hands. Today an
open source supplier of security vulnerability information, the OSVDB,
just went live with a whole new revision to its service. According to
the web site description, OSVDB is "an independent and open source
database created by and for the security community. The goal of the
project is to provide accurate, detailed, current, and unbiased
technical information on security vulnerabilities. The project will
promote greater, more open collaboration between companies and
individuals, eliminate redundant works, and reduce expenses inherent
with the development and maintenance of in-house vulnerability databases.
[Where] Common Vulnerabilities and Exposures (CVE) provides a
standardized name for vulnerabilities, much like a dictionary, OSVDB
is a database that provides a wealth of information about each
vulnerability. Where appropriate, entries in the OSVDB reference their
respective CVE names." The basic idea's pretty elegant: take all the
ethically disclosed software security information you can find and make it
available in as detailed and up-to-date a format as you can, without tying
it to the interests of any particular software vendor. The results can and
have been integrated with a number of third-party security products
such as Nikto -- itself an open source product. [Note: OSVDB supports
three database types for XML importation: PostgreSQL, MySQL, and
Microsoft Access. The database may also be accessed through the XML
export file directly. The XML export was designed such that all database
integrity is stored within the structure of the XML file. By this means
anyone can keep a local copy of the current OSVDB snapshot, even in
the absence of a local database such as PostgreSQL. Another feature
of the chosen formatting is the ease with which this XML export can be
integrated into products using tools such as XPath to pull all the
information about a specific vulnerability straight from the XML file.]
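The XPath idea in that note is straightforward with Python's standard
library. The element names below are hypothetical stand-ins — the real OSVDB
export follows the project's own schema — but the pattern of pulling one
vulnerability straight from the XML file is the same.

import xml.etree.ElementTree as ET

# Hypothetical OSVDB-style snapshot; element names are placeholders.
snapshot = ET.fromstring("""
<vulns>
  <vuln id="1234">
    <title>Example overflow in libfoo</title>
    <cve>CVE-2007-0000</cve>
  </vuln>
  <vuln id="5678">
    <title>Example XSS in barapp</title>
  </vuln>
</vulns>
""")

# Limited XPath support in ElementTree is enough to fetch one entry by id.
entry = snapshot.find(".//vuln[@id='1234']")
print(entry.findtext("title"))   # Example overflow in libfoo
print(entry.findtext("cve"))     # CVE-2007-0000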
W3C First Public Draft: Cool URIs for the Semantic Web
W3C announced that the Semantic Web Education and Outreach Interest Group
has released a first Working Draft for "Cool URIs for the Semantic Web."
Comments on this draft are requested by 21-January-2008. The document
explains the effective use of URIs to enable the growth of the Semantic
Web. URIs (Uniform Resource Identifiers) more simply called "Web
addresses" are at the heart of the Web and also of the Semantic Web.
It gives pointers to several Web sites that use these solutions, and
briefly discusses why several other proposals have problems. Web
documents have always been addressed with URIs (in common parlance often
referred to as Uniform Resource Locators, or URLs). This is useful because it
means we can easily make RDF statements about Web pages, but also
dangerous because we can easily mix up Web pages and the things, or
resources, described on the page. So the question is, what URIs should
we use in RDF? To identify the front page of the Web site of Example Inc.,
we may use 'http://www.example.com/'. But what URI identifies the company
as an organisation, not a Web site? Do we have to serve any content
(HTML pages, RDF files) at those URIs? In this document we will answer
these questions according to relevant specifications. We explain how to
use URIs for things that are not Web pages, such as people, products,
places, ideas and concepts such as ontology classes. We give detailed
examples of how the Semantic Web can (and should) be realised as a part of
the Web. The draft document is a practical guide for implementers of the
RDF specification. It explains two approaches for RDF data hosted on
HTTP servers (called 303 URIs and hash URIs). Intended audiences are
Web and ontology developers who have to decide how to model their RDF
URIs for use with HTTP. Applications using non-HTTP URIs are not covered.
This document is an informative guide covering selected aspects of
previously published, detailed technical specifications.
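The 303 approach can be sketched in a few lines: a request for the URI that
names a thing never returns content directly but redirects, with 303 See
Other, to a document about the thing, optionally choosing HTML or RDF by
content negotiation. The handler below is a minimal illustration with
made-up paths, not the draft's own example.

from http.server import BaseHTTPRequestHandler, HTTPServer

class CoolURIHandler(BaseHTTPRequestHandler):
    """/id/alice names a person (not a document); requests for it are
    redirected with 303 to a document about her, per content negotiation."""

    def do_GET(self):
        if self.path == "/id/alice":
            wants_rdf = "application/rdf+xml" in self.headers.get("Accept", "")
            target = "/doc/alice.rdf" if wants_rdf else "/doc/alice.html"
            self.send_response(303)              # See Other
            self.send_header("Location", target)
            self.end_headers()
        else:
            body = ("A document about: %s\n" % self.path).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CoolURIHandler).serve_forever()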
Friday, December 14, 2007
Flickr Upload Tool Turns 3.0, Goes Open-Source
Flickr has released a new version of its tool for uploading photos to
the Yahoo photo-sharing site, and made it an open-source program in the
process. Flickr Uploadr 3.0, which runs on Mac OS X 10.4 and 10.5 and on
Windows XP and Vista, is now available in source code form, too, governed
by version 2 of the General Public License (GPL). Open-source
software may be freely modified, copied, and shared; opening source
code could let programmers modify the Uploadr tool so it works on Linux
or uploads to other photo-sharing sites, for example. Uploadr lets
photographers select photos for upload, add tags, organize them into
sets, and change privacy settings. Among the changes in Version 3 is
the ability to set the photo order in sets and to add new photos to
the upload queue while others are in the process of being transferred.
Flickr Stats shows whence visitors came to look at your photos, either
from within Flickr or outside on the Web. Stats also shows totals for
recent viewings of photos and compiles data such as how many photos
have tags, geotags, and comments. Views of your photos can be sorted
by viewing totals, comments, favorite status, and the ever-elusive
"interestingness" ranking.
That's ISO not I-S-O
The next time you're talking about the standardization and the
International Organization for Standardization comes up, be sure to
pronounce it as [English /eye-so/] "Iso" and not "I-S-O." We say this
because ISO does not, in fact, stand for the International Organization
for Standardization (or the International Standardization Organization,
which doesn't even exist). We heard this neat tidbit at the XML 2007
conference, held in Boston last week. Ken Holman, who this week steps
down from his role as the international secretary of the ISO subcommittee
responsible for the Standard Generalized Markup Language (SGML), gave a
briefing on ISO and related matters during the conference's lightning
round sessions Tuesday night. He noted that the ISO name actually comes
from "iso," the Greek prefix for equal. For instance, isometric refers
to the equality of measurement... Holman dropped another tidbit during
his talk as well. We may see a new ISO/IEC working group devoted to
office document formats, such as the OpenDocument Format and the Microsoft
Office Open XML standard. First, some hierarchy needs to be explained.
ISO works on a wide variety of standards, covering everything from medical
equipment to film (ISO 400, ISO 200, etc.). In many information
technology standards designations, we often see ISO in
conjunction with IEC. For instance, ISO/IEC 13818 is the
internationally-approved designation for MPEG-2. The two bodies often
work together on IT standards. The International Electrotechnical
Commission (IEC) was founded a little over 100 years ago (by none other
than Lord Kelvin, among others!) to standardize the then-emerging field
of electrical componentry. Both IEC and ISO were doing work in IT, so
in order to eliminate duplication, they founded a joint body, called
the Joint Technical Committee (JTC 1), the only joint committee shared
by the two organizations. JTC 1 has a number of subcommittees,
handling standards for everything from biometrics to user interface
conventions. SC34 is the committee that begat SGML, which in turn
begat XML... SC34 itself has a number of different working groups.
WG 1 handles the data types and character types for XML documents.
WG 2 handles the presentation of documents, including the font
management and the like. WG 3 took the World Wide Web Consortium's
Hypertext Markup Language specification and made it an international
standard.
XBRL Reaches Marquee Companies
Ford, General Electric, Infosys and Microsoft are already using
Extensible Business Reporting Language (XBRL) tags to file financial
reports. Can a full SEC
mandate be far off? With the release of new Extensible Business Reporting
Language taxonomies and Microsoft's announcement Dec. 6 that it used
the technology to file its quarterly earnings report to the Securities
and Exchange Commission, XBRL is proving it is mature enough to warrant
widespread attention and adoption. Microsoft is currently only one of
61 companies to voluntarily use XBRL to make SEC filings, said Rob Blake,
senior director, Interactive Services, Bowne and Co., and a founding
member of the XBRL consortium, founded in 1999 to develop and maintain
the language. Those companies include Bowne itself, as well as Ford,
General Electric and Infosys, Blake said. The Federal Deposit Insurance
Corporation has also been using XBRL for two years, Blake said. "Every
financial institution in the U.S. that's regulated by the FDIC has
been using this language" to submit financial information to the FDIC,
he said. But some in the financial services industry already speculate
that the SEC is leaning towards mandating the use of the language in
reporting and filings. XBRL is likened to an XML schema, or digital
"bar code," which lets companies represent their data in a format
easily and quickly understood and processed by computers. The language
ensures that companies can accurately transmit financial data internally
and to investors, analysts and the SEC. The new taxonomies, based on
GAAP (Generally Accepted Accounting Principles) and released December
5, 2007, broaden and deepen the types of data to which XBRL can be
applied, making the language more accessible for companies across a
broader industry spectrum.
Building a Grid System Using WS-Resource Transfer, Part 5: Using WS-RT
The WS-RT standard provides a new method for accessing and exchanging
information about resources between components. It is designed to
enhance the WS-Resource Framework (WSRF) and build on the WS-Transfer
standards. The WS-RT system extends previous resource solutions for
Web services and makes it easy not only to access resource information
by name but also to access individual elements of a larger data set
through the same mechanisms by exposing elements of an XML data set
through the Web services interfaces. In any grid, there is a huge amount
of metadata about the grid that needs to be stored and distributed.
Using WS-RT makes sharing the information, especially the precise
information required by different systems in the grid, significantly
easier. This article concludes the five-part "Building a grid system
using WS-Resource Transfer" series. Let's revisit some key elements of
the WS-RT system and how we've used it throughout the series to work
as a flexible solution for different grid solutions. The key to the
WS-RT system is the flexible method with which we can create and
recover information within its repository. Technically, WS-RT is not
seen as a general-purpose solution for the storage and recovery of
information, but in practice the XML structure, and the ease with which
we can process information using the QName and XPath dialects to
extract and update it, make it a flexible and
easy-to-manipulate system for information storage and distribution.
It can be used on a number of levels, as we've seen throughout the
series, from the fundamentals of information storage to the
organization and definition of security information, and for the
distribution of work throughout the grid system. Using the flexible
nature of WS-RT makes the distribution of work easy and allows us to
bypass some of the problems and limitations that exist in other grid
systems.
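The series works with WS-RT's own SOAP messages, but the underlying idea
of addressing individual elements of a larger resource document can be
shown locally. The sketch below is my own illustration, not WS-RT wire
code; the grid-metadata document and the XPath expression are invented.
It uses the JDK's XPath API to pull a single fragment out of a metadata
document, which is the kind of precise, element-level access WS-RT
exposes through Web service interfaces:

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import java.io.ByteArrayInputStream;

    // Local illustration of element-level access to a resource document.
    // WS-RT itself would carry the expression inside a SOAP request; here
    // we simply evaluate it against an in-memory XML document.
    public class FragmentAccess {
        public static void main(String[] args) throws Exception {
            String metadata =
                "<grid>" +
                "<node id='n1'><cpuCount>8</cpuCount><state>idle</state></node>" +
                "<node id='n2'><cpuCount>16</cpuCount><state>busy</state></node>" +
                "</grid>";
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(metadata.getBytes("UTF-8")));
            XPath xpath = XPathFactory.newInstance().newXPath();
            // Fetch only the fragment we need, not the whole document.
            String state = xpath.evaluate("/grid/node[@id='n2']/state", doc);
            System.out.println("node n2 state: " + state);  // prints "busy"
        }
    }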
Validation by Projection
Many of the architectures and strategies for validation apply validity
checking to a particular document with a pass or fail result on the
document. This assumes that the schemas used in validation are expressive
enough for all the potential versions of documents including any
extensions. We've regularly seen that the Schema 1.0 wildcard limits
the ability to fully describe documents. For example, it is
impossible to have a content model that has optional elements in
multiple namespaces with a wildcard at the end. The choice is to either
have the wildcard or the elements. There is another approach to
validation, called validation by projection, which effectively removes
any unknown content prior to validation. It is validation of a projection
of the XML document, where the projection is a subset of the XML document
with no other modifications to the contents including order. Part of
validation by projection is determining what to project. The simplest
rule for determining what to project is: Starting at the root element,
project any attributes and any elements that match elements in the
content model of the current complexType and recurse into each element.
[Author's note to W3C TAG: I wrote up a couple of personal blog entries
on validation by projection. This seems to be a useful way of achieving
forwards and backwards compatibility without relying upon schemas that
have wildcards or open content models. From the TAG's definitional
perspective, I'd characterize validation by projection as an architecture
where the schema(s) define a Defined Text Set and an Accept Text Set
that is equal to the Defined Text Set, then the process of projection
is the creation and validation of the text against a generated Accept
Text Set that has the original Accept Text Set plus all possible extra
undefined elements and attributes.]
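A minimal sketch of the projection step, under the simplifying assumption
that the "content model" is just a set of known child-element names per
parent (a real implementation would consult the complexType definitions
in the schema): starting at the root, unknown child elements are removed
and the known ones are recursed into, leaving order and everything else
untouched.

    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;
    import java.util.*;

    // Simplified "validation by projection" pre-pass: strip unknown
    // elements before handing the document to an ordinary validator.
    // The knownChildren map stands in for the schema's content models
    // and is an assumption of this sketch, not part of the proposal.
    public class Projector {
        private final Map<String, Set<String>> knownChildren;

        public Projector(Map<String, Set<String>> knownChildren) {
            this.knownChildren = knownChildren;
        }

        public void project(Document doc) {
            projectElement(doc.getDocumentElement());
        }

        private void projectElement(Element parent) {
            Set<String> allowed = knownChildren.getOrDefault(
                    parent.getTagName(), Collections.<String>emptySet());
            NodeList children = parent.getChildNodes();
            // Collect first, then act, to avoid mutating the live NodeList.
            List<Element> keep = new ArrayList<Element>();
            List<Element> drop = new ArrayList<Element>();
            for (int i = 0; i < children.getLength(); i++) {
                Node child = children.item(i);
                if (child.getNodeType() != Node.ELEMENT_NODE) continue;
                Element el = (Element) child;
                if (allowed.contains(el.getTagName())) keep.add(el);
                else drop.add(el);          // unknown extension content
            }
            for (Element el : drop) parent.removeChild(el);
            for (Element el : keep) projectElement(el);
        }
    }

The projected document is then validated normally; since nothing was
reordered, a pass means the known subset of the original document is valid.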
Standards for Personal Health Records
I have identified the four major types of personal health records:
provider-hosted, payer-based, employer-sponsored and commercial. As
more products are offered, it's key that all the stakeholders involved
embrace national healthcare data standards to ensure interoperability
of the data placed in personal health records. To illustrate the point,
I am posting my entire lifelong medical record on my blog (this is with
my consent, so there are no HIPAA issues) in two ways. The first is a
PDF which was exported from a leading electronic health record system.
It's 77 pages long and contains a mixture of clinical data, administrative
data, normal and abnormal results, numeric observations, and notes.
It's a great deal of data, but is very challenging to understand, since
it does not provide an organized view of the key elements a clinician
needs to provide me ongoing care. It is not semantically interoperable,
which means that it cannot be read by computers to offer me or my
doctors the decision support that will improve my care. The second is
a Continuity of Care Document, using the national Health Information
Technology Standards Panel (HITSP) interoperability specifications.
It uses "Web 2.0" approaches, is XML based, machine and human readable,
and uses controlled vocabularies enabling computer-based decision support.
Today (December 13), HITSP will deliver the harmonized standards for
Personal Health Records, Labs, Emergency Records, and Quality measurement
to HHS Secretary Leavitt. These "interoperability specifications" will
become part of Federal contracting language and be incorporated into
vendor system certification criteria (CCHIT) over the next two years.
Why Revise HTTP?
By Mark Nottingham
I haven't talked about it here much, but I've spent a fair amount of
time over the last year and a half working with people in the IETF to
get RFC2616 (the HTTP specification) revised. HTTP started as a
protocol just for browsers, and its task was fairly simple. Yes,
persistent connections and ranged requests make things a bit more
complex, but the use cases were relatively homogenous almost a decade
ago, and the people doing the implementations were able to assure
interop for those common cases. Now, a new generation of developers are
using HTTP for things that weren't even thought of then; AJAX, Atom,
CalDAV, 'RESTful Web Services' and the like push the limits of what HTTP
is and can do. The dark corners that weren't looked at very closely in
the rush to get RFC2616 out are now coming to light, and cleaning them
up now will help these new uses, rather than encourage them to diverge
in how they use HTTP. So, while the focus of the WG is on implementors,
to me that doesn't just mean Apache, IIS, Mozilla, Squid and the like;
it also means people using HTTP to build new protocols, like OAuth and
Atom Publishing Protocol. It means people running large Web sites that
use HTTP in not-so-typical ways. Another reason to revise HTTP is that
there are a lot of things that the spec doesn't say.
Exploring Validation in an End-to-end XML Architecture
An application architecture that uses XML for data storage and message
passing throughout the life cycle of the data can leverage powerful
data validation techniques early and often. It can identify problems
with the data quickly and greatly increase the overall assurance of the
correctness of that data. XForms fits naturally into such an architecture,
as it requires users to enter their data as XML. As a result, XForms
provides a direct interface to the power of XML validation tools for
immediate and meaningful feedback to the user about any problems in
the data. Further, the use of validation components enables and
encourages the reuse of these components at other points of entry into
the system, or at other system boundaries. This article sketches the
uses of validation in an architecture that operates over XML data that
is at least partly entered through human-computer interaction. At the
data entry end, the architecture validates data produced from XForms
by re-purposing Schematron validation rules for different approaches
to helping the user understand problems that might exist in the data.
The article also briefly mentions an orthogonal use of the same
Schematron rules for validating mappings from legacy data formats such
as relational databases to a local XML schema. Multiplexing validation
components for multiple uses enables a pervasive level of quality control
over XML content, both at the point of entry and retrospectively.
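One common way to reuse the same rules at every boundary is to compile
the Schematron schema to XSLT once and run that stylesheet wherever XML
enters the system. The sketch below is a minimal illustration under that
assumption; the file names (rules-compiled.xsl, order.xml) are invented,
and the compilation step itself is done beforehand with the usual
Schematron tooling.

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import java.io.File;
    import java.io.StringWriter;

    // Apply a pre-compiled Schematron stylesheet to an XML instance and
    // capture the resulting validation report (typically SVRL).
    public class SchematronCheck {
        public static String validate(File instance, File compiledRules)
                throws Exception {
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(compiledRules));
            StringWriter report = new StringWriter();
            transformer.transform(new StreamSource(instance),
                                  new StreamResult(report));
            // The same compiled rules can back as-you-type XForms feedback
            // or a retrospective batch check over legacy-mapped data.
            return report.toString();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(validate(new File("order.xml"),
                                        new File("rules-compiled.xsl")));
        }
    }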
The ROI of XForms
This article examines several methods of calculating the Return on
Investment (ROI) of adopting enterprise-wide XForms standards. It
explores ROI analysis from several different viewpoints, including
the standards perspective and issues around vendor lock-in avoidance
strategies. It discusses three ROI models for an enterprise XForms
migration: The use of vendor knowledge to convert standard forms to
a rich Web-client-based XForms application; an investment and savings
calculation over a three-year period; and how XForms can form a
synergistic relationship with XML-centric technologies such as Service
Oriented Architecture (SOA) and Business Process Management (BPM). The
article concludes with a discussion on how to overcome common objections
to an XForms initiative. HTML was never designed as an application
development language. XForms is a powerful and deep-reaching technology
that could have a large impact on an organization's overall IT strategy.
On the surface, there are around twenty data elements that are added
to XHTML pages to enhance usability. However, underlying XForms is a
change in the contract between the browser and all Web-based
applications. It changes the Web browser from a "dumb" device that
allows you to navigate between Web pages to a "smart" device with a
clean and elegant architecture that can load intelligent Web
applications and execute as-you-type business rules. When coupled with
other XML-centric technologies such as SOA/ESB and BPM, XForms can give
an organization a large return on investment.
W3C Invites Implementations of Pronunciation Lexicon Specification (PLS)
Members of the W3C Voice Browser Working Group have published the
Candidate Recommendation for "Pronunciation Lexicon Specification (PLS)
Version 1.0." Implementation feedback is welcome through 11-April-2008.
A PLS 1.0 Implementation Report Plan is available. Implementation Report
objectives are to verify that the specification is implementable; testing
must demonstrate interoperability of implementations of the specification.
A test report must indicate the outcome of each test. Possible outcomes
are pass, fail, or not-implemented. The Pronunciation Lexicon Specification
provides the basis for describing pronunciation information for use in
speech recognition and speech synthesis, for use in tuning applications,
e.g. for proper names that have irregular pronunciations. PLS is designed
to enable interoperable specification of pronunciation information for
both Automatic Speech Recognition (ASR) and Text-To-Speech (TTS) engines,
which internally provide extensive high quality lexicons with pronunciation
information for many words or phrases. To ensure a maximum coverage of
the words or phrases used by an application, application-specific
pronunciations may be required. The Working Group has also updated Speech
Synthesis Markup Language (SSML) Version 1.1. The Speech Synthesis Markup
Language Specification is one of these standards and is designed to
provide a rich, XML-based markup language for assisting the generation
of synthetic speech in Web and other applications. The essential role
of the markup language is to provide authors of synthesizable content
a standard way to control aspects of speech such as pronunciation,
volume, pitch, rate, etc. across different synthesis-capable platforms.
Changes from the previous draft include the addition of a new "type"
attribute with the value "ruby", a change of references from
"pronunciation alphabet" to "pronunciation scheme", and modified
attribute names for the audio element.
DITA Specialization Support: It Should Just Work
DITA's specialization mechanism both enables sophisticated generic
processing and effectively demands that tools provide it. That is,
when presented with valid, conforming DITA documents, tools should
"just work," applying all appropriate default DITA processing and
behavior without any up-front configuration (with the possible exception
of specifying the entity resolution catalog needed to resolve references
to DTDs and schemas). Not many tools beyond the DITA Open Toolkit
actually do just work. RSuite does. In particular, it uses the DITA
1.1 DITAArchVersion attribute to reliably detect DITA documents
regardless of what local declaration set or specializations they use.
As both an integrator and a provider of a tool designed to be integrated,
I find that the tools that also just work offer the greatest value. I
would like to see all DITA-aware tools provide the same level of
automatic configuration and processing.
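As a rough sketch of that kind of detection (my own illustration, not
RSuite code; the namespace URI below is the DITA architectural namespace
as commonly cited and should be checked against the DITA 1.1
specification), a tool can look for the DITAArchVersion attribute on the
root element instead of relying on DOCTYPE names:

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Element;
    import java.io.File;

    // Recognize a DITA document, including specializations, by its
    // DITAArchVersion attribute rather than its DOCTYPE.
    public class DitaSniffer {
        // Assumed value of the DITA architectural namespace; verify it
        // against the DITA 1.1 specification before relying on it.
        private static final String DITA_ARCH_NS =
            "http://dita.oasis-open.org/architecture/2005/";

        public static String ditaVersion(File file) throws Exception {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);  // needed to see the attribute's namespace
            Element root = dbf.newDocumentBuilder().parse(file).getDocumentElement();
            String version = root.getAttributeNS(DITA_ARCH_NS, "DITAArchVersion");
            return version.length() == 0 ? null : version;  // e.g. "1.1"
        }

        public static void main(String[] args) throws Exception {
            String v = ditaVersion(new File("topic.dita"));
            System.out.println(v == null ? "not recognized as DITA" : "DITA " + v);
        }
    }

In practice the parse would also be wired to an entity resolution catalog
so the document's DTD or schema can be found, as noted above.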
DITA: Does One Size Fit All?
The XML dialect of choice is the new DITA (Darwin Information Typing
Architecture), developed originally by IBM and now an OASIS standard.
Of twelve XML editors reviewed in June 2006, eight now do DITA, and
one new WYSIWYG XML authoring tool has entered the market that does
only DITA. The Arbortext Editor, formerly known as the Epic Editor,
has been doing DITA as long as anyone, years before it became an OASIS
standard. New owner PTC bought Arbortext because its major customers
used it and they wanted to integrate the production of technical
documentation into the product design process. PTC is "drinking its
own champagne" as they convert their own documentation to DITA. They
now also offer a ready-made DITA application that does 90 percent of
the work of producing a fully-designed service manual. XMetaL was the
first XML editor back in 1996, and it jumped on the OASIS DITA standard
early, integrating the DITA Open Toolkit end-to-end publishing solution.
They quickly earned mind share among DITA authors and were
acquired by Japanese XML publishing powerhouse Justsystems. Adobe
added a DITA application pack accessory to FrameMaker 7.2 and have now
integrated DITA completely in release 8. The latest XML editor in my
2006 study to add DITA support is SyncRO Soft's oXygen, a tool popular in
academic institutions because it is Java based and runs on all
platforms: Windows, Macintosh, and Linux. Syntext Serna is another
multi-platform XML editor that, like Arbortext, has been doing DITA
its own way for some years. Even less expensive for freelance writers
getting started with DITA is the XMLmind XML Editor. XXE is downloadable
at no cost for personal use.
AIIM Adopts Strategic Markup Language (StratML)
The AIIM Standards Board has announced that it is adding Strategic
Markup Language (StratML) to its standards program of work. AIIM,
based in Silver Spring, Md., is an enterprise content management
association. Owen Ambur, former senior architect at the Interior
Department, and Adam Schwartz, a program analyst in the Program
Management Office at the Government Printing Office, oversaw development
of the StratML schema, which is designed to encapsulate strategic plans,
performance plans and performance reports in a format based on
Extensible Markup Language (XML), the association said last week. The
standardized XML template and vocabulary will allow agencies and other
organizations to encode their plans and reports so that they can be
easily indexed, shared and processed. They will also allow agencies to
ensure that those products align with policies, standards, goals and
objectives. Four applications have been shown to support StratML:
Microsoft's InfoPath and Word applications, Business Web Software's
AchieveForms, and FormRouter's PDF-Fillable, Ambur said. In addition,
Mark Logic Corp. is developing a StratML search service, and
HyperVision is drafting a quick start guide for Word users. John
Weiler, executive director and co-founder of the Interoperability
Clearinghouse, which brings together standards groups for collaboration,
said ICH was looking for ways to funnel strategic planning information
into the architecture and acquisition process. StratML could play a part.
Wednesday, December 12, 2007
Use Castor for XML Data Binding
This article shows how to convert Java classes to XML and transform that
XML back into Java code, as well as how Castor works and how to design
your classes to function well with the API. The most basic operation in
Castor is to take a Java class and marshal an instance of that class to
XML. You take the class itself and use it as a top-level container element.
You always marshal an instance of a class, not the class itself. A class
is structure, and is best equated to an XML constraint model, like a
DTD or XML Schema. A class on its own has no data, and merely defines
the structure for data to be stored, as well as how it can be accessed.
You instantiate (or obtain from a factory or other instance-producing
mechanism) that class to give it a specific form. Then, you populate the
fields of that instance with actual data. That instance is unique; it
bears the same structure as any other instances of the same class, but
the data is separate. Notice what Castor does not preserve in the XML:
(1) The package of the Java class: a Java package is not part of a class's
structure. It's actually a semantic issue, related to Java namespaces.
So you could unmarshal (convert from XML to Java code) this XML document
to any Book instance that had the same three properties, regardless of
package. (2) Field ordering: order matters in XML, but not in Java
programming. So even though the source file listed the fields in one
order, the XML document used another. That's important in your XML, but
irrelevant in your Book class declaration. (3) Methods: methods, like
a package declaration, have nothing to do with data structuring. So the
XML document doesn't do anything with them; they're ignored. Article
prerequisite: you would do well to have some classes you'd like to
convert to and from XML from a project you're working on. There are
sample classes provided with this and the previous article, but your
own mastery of Castor is best achieved if you apply what you see here
to your own projects.
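A minimal sketch of the round trip described above, using Castor's
introspection of an ordinary JavaBean (the Book class and its three
properties are hypothetical stand-ins for the article's sample classes):

    import org.exolab.castor.xml.Marshaller;
    import org.exolab.castor.xml.Unmarshaller;
    import java.io.StringReader;
    import java.io.StringWriter;

    // Marshal an *instance* of Book to XML, then unmarshal it back.
    // Castor introspects the getter/setter pairs; the package name and
    // the methods themselves are not serialized.
    public class CastorRoundTrip {

        public static class Book {
            private String title;
            private String author;
            private String isbn;
            public String getTitle()        { return title; }
            public void setTitle(String t)  { title = t; }
            public String getAuthor()       { return author; }
            public void setAuthor(String a) { author = a; }
            public String getIsbn()         { return isbn; }
            public void setIsbn(String i)   { isbn = i; }
        }

        public static void main(String[] args) throws Exception {
            Book book = new Book();
            book.setTitle("A Sample Book");
            book.setAuthor("A. Author");
            book.setIsbn("1234567890");

            // Instance -> XML
            StringWriter out = new StringWriter();
            Marshaller.marshal(book, out);
            String xml = out.toString();
            System.out.println(xml);

            // XML -> a new instance; any class exposing the same three
            // properties could be the target, regardless of its package.
            Book copy = (Book) Unmarshaller.unmarshal(Book.class,
                                                      new StringReader(xml));
            System.out.println(copy.getTitle());
        }
    }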
Ulteo Brings OpenOffice to Web Browser
Ulteo, a company staffed by Linux veterans, has launched the test
version of a service that lets people run the OpenOffice.org desktop
suite in the Firefox or Internet Explorer browsers. The hosted version
of OpenOffice version 3.2 supports PDF printing. The service is designed
to let people collaborate with OpenOffice documents online and use the
open-source application suite without having to download it. People
can also exchange documents in Microsoft's Office format or PDF. The
service also supports the OpenDocument Format standard. Already several
companies are offering online versions of traditional desktop
applications, including Google, Zoho, and others. Microsoft recently
released Office Live Workspace, which lets people share Office documents
on a hosted Web server. The Ulteo service is aimed specifically toward
people who use the OpenOffice suite. From the announcement: "As well as
offering instant 'no-install' access, Ulteo's service also provides
OpenOffice.org users with instant collaboration capabilities. A user
working with OpenOffice.org on the Ulteo server can invite other people
to work with him or her on a shared document in real time. Invitations
are sent via email and allow access in either read only or full edit
mode, simply by clicking on a link in the email."
OASIS SAML TC Releases Bindings and Profile Specifications for Review
OASIS announced that the Security Services (SAML) Technical Committee
has released five approved Committee Draft specifications for public
review. These specifications are follow-on deliverables to SAML version
2.0. (1) "SAMLv2.0 HTTP POST 'SimpleSign' Binding" provides an addition
to the bindings described in "Bindings for the OASIS Security Assertion
Markup Language (SAML) V2.0." It defines a SAML HTTP protocol binding,
specifically using the HTTP POST method, and not using XML Digital
Signature for SAML message data origination authentication. Rather, a
'sign the BLOB' technique is employed wherein a conveyed SAML message
is treated as a simple octet string if it is signed. Conveyed SAML
assertions may be individually signed using XMLdsig. Security is optional
in this binding. (2) "Identity Provider Discovery Service Protocol and
Profile" is an alternative to the SAML V2.0 Identity Provider Discovery
profile in the "Profiles for the OASIS Security Assertion Markup Language
(SAML) V2.0" specification. It defines a generic browser-based protocol
by which a centralized discovery service implemented independently of
a given service provider can provide a requesting service provider with
the unique identifier of an identity provider that can authenticate a
principal. (3) "SAML V2.0 Attribute Sharing Profile for X.509
Authentication-Based Systems" is an alternative to "SAML V2.0 Deployment
Profiles for X.509 Subjects." This deployment profile specifies the use
of SAML V2.0 attribute queries and assertions to support distributed
authorization in support of X.509-based authentication. (4) "SAML V2.0
Deployment Profiles for X.509 Subjects" is an alternative to "SAML V2.0
Attribute Sharing Profile for X.509 Authentication-Based Systems." This
related set of SAML V2.0 deployment profiles specifies how a principal
who has been issued an X.509 identity certificate is represented as a
SAML Subject, how an assertion regarding such a principal is produced
and consumed, and finally how two entities exchange attributes about
such a principal. (5) "SAML V2.0 X.500/LDAP Attribute Profile" supersedes
the X.500/LDAP Attribute Profile in the original OASIS Standard "Profiles
for the OASIS Security Assertion Markup Language (SAML) V2.0." The
original profile results in well-formed but schema-invalid XML and
cannot be corrected without a normative change.
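The contrast with XML Digital Signature is that the signature is computed
over a plain octet string rather than a canonicalized XML infoset. As a
generic illustration of that "sign the BLOB" idea (not the exact input
construction defined by the SimpleSign binding, which also covers
RelayState and the signature-algorithm parameter), an octet string can be
signed and verified with nothing more than the JDK:

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    // Sign an octet string directly: no canonicalization and no XML
    // Signature processing, just bytes in and a signature out.
    public class BlobSign {
        public static void main(String[] args) throws Exception {
            byte[] blob = "<samlp:Response>...</samlp:Response>".getBytes("UTF-8");

            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair keys = kpg.generateKeyPair();

            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(keys.getPrivate());
            signer.update(blob);
            byte[] sig = signer.sign();

            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(keys.getPublic());
            verifier.update(blob);
            System.out.println("signature valid: " + verifier.verify(sig));
        }
    }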
Call for Implementations: Extensible MultiModal Annotation Markup Language
W3C has issued a call for implementations of the "EMMA: Extensible
MultiModal Annotation Markup Language" specification, recently advanced
to the stage of Candidate Recommendation. W3C publishes a technical
report as a Candidate Recommendation to indicate that the document is
believed to be stable, and to encourage implementation by the developer
community. Implementation feedback is welcome through 14-April-2008. The
EMMA specification has been produced by members of the W3C Multimodal
Interaction Working Group as part of W3C's Multimodal Interaction Activity.
EMMA is a data exchange format for the interface between input processors
and interaction management systems within the Multimodal Architecture and
Interfaces, and defines the means to annotate application-specific data
with information such as confidence scores, time stamps, input mode,
alternative recognition hypotheses, and partial recognition results. The
W3C Multimodal Interaction working group aims to develop specifications
to enable access to the Web using multimodal interaction. This document
is part of a set of specifications for multimodal systems, and provides
details of an XML markup language for containing and annotating the
interpretation of user input. Examples of interpretation of user input
are a transcription into words of a raw signal, for instance derived
from speech, pen or keystroke input, a set of attribute/value pairs
describing their meaning, or a set of attribute/value pairs describing
a gesture. The interpretation of the user's input is expected to be
generated by signal interpretation processes, such as speech and ink
recognition, semantic interpreters, and other types of processors for
use by components that act on the user's inputs such as interaction
managers.
Tuesday, December 11, 2007
PingFederate Web Services Provides WS-Trust Security Token Service (STS)
Ping Identity announced that PingFederate Web Services 2.6 is available
for immediate download from its Web site. Now packaged as an optional
add-on module for PingFederate, Ping Identity's industry-leading
standalone federated identity software, PingFederate Web Services 2.6
adds support for the OASIS WS-Trust 1.3 standard, as well as the ability
to create and validate CA SiteMinder SMSESSION tokens. PingFederate Web
Services, previously called PingTrust, is an optional PingFederate module
designed for organizations wanting to extend their browser-based Internet
Single Sign-On architecture to incorporate Web services and
Service-Oriented Architectures (SOAs). It acts as a WS-Trust Security
Token Service (STS), creating and validating security tokens that get
bound into SOAP messages to carry user identity information in a
standards-based manner. PingFederate Web Services 2.6 adds support for
OASIS WS-Trust version 1.3, the first version of WS-Trust to be published
as an official industry standard by OASIS. In addition, it adds the
ability to create and validate SMSESSION tokens. With this new capability,
SiteMinder-enabled enterprises can create Web service clients and
providers that use WS-Trust to issue or validate proprietary SMSESSION
tokens, as well as exchange SMSESSION tokens for other token types such
as SAML assertions.
Paris Welcomes Ruby on Rails 2.0
Version 2.0 of Ruby on Rails was released Friday [2007-12-07]. Rails
offers a framework of tools for developing Web sites using Ruby, a
programming language invented in 1995 by Yukihiro Matsumoto. Rails creator
David Heinemeier Hansson, of Web application developer 37signals, joined
the Paris on Rails
conference by video-link to present the changes: "In 2.0 we're making
a really strong statement about RESTful application design," he said,
referring to the new version's preference for REST (Representational
State Transfer) rather than SOAP (Simple Object Access Protocol) for
passing messages in Web applications. Ruby now has support from industry
stalwarts like Sun Microsystems and Microsoft. Sun recently hired the
developers of JRuby, an implementation of Ruby for the Java virtual
machine that allows Ruby on Rails developers to make use of the work
enterprises have already put into developing Java application frameworks.
Microsoft, for its part, hired the developer of RubyCLR, a bridge between
Ruby and Microsoft's .Net framework, allowing Rails developers to
similarly leverage businesses' .Net legacy. In the security space, Rails
2.0 makes it easier to protect against phishing, with provisions to
guard against CSRF (cross-site request forgery) intrusions. Safeguards
against XSS (cross-site scripting) attacks are included as well. Also
featured in Rails 2.0 are improved testing support and backing for Atom
feeds. Hansson: "We're making it really easy for applications to emit
feeds, which is critical to application updates." Another new feature in
version 2.0 is a framework called ActiveResource, which encapsulates Web
services and makes them as easy to use as databases, Hansson said. This
is similar to the ActiveRecord feature for encapsulating database calls
in Rails.
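Since the article describes ActiveResource only in prose, here is a rough Python analogue of the pattern it names: a thin class that maps object-style calls onto the HTTP verbs of a RESTful service, much as ActiveRecord maps them onto SQL. This is a sketch of the idea, not Rails' actual ActiveResource API; the endpoint URL, the person payload and the method names are invented.

# A Python analogue of the ActiveResource idea described above: a thin class
# that translates find/save/destroy calls into GET/POST/PUT/DELETE against a
# RESTful XML service. A sketch of the pattern, not Rails' actual API; the
# endpoint and payloads are hypothetical.
import urllib.request

class RestResource:
    site = "http://example.com/people"   # hypothetical RESTful endpoint

    def __init__(self, resource_id=None, body=None):
        self.resource_id = resource_id
        self.body = body                  # raw XML payload for the resource

    @classmethod
    def _request(cls, method, url, data=None):
        req = urllib.request.Request(url, data=data, method=method,
                                     headers={"Content-Type": "application/xml"})
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode("utf-8")

    @classmethod
    def find(cls, resource_id):
        """GET /people/1.xml -> a populated resource object."""
        xml = cls._request("GET", f"{cls.site}/{resource_id}.xml")
        return cls(resource_id, xml)

    def save(self):
        """POST to create, PUT to update -- the CRUD-over-HTTP mapping."""
        if self.resource_id is None:
            self._request("POST", f"{self.site}.xml", self.body.encode("utf-8"))
        else:
            self._request("PUT", f"{self.site}/{self.resource_id}.xml",
                          self.body.encode("utf-8"))

    def destroy(self):
        """DELETE /people/1.xml"""
        self._request("DELETE", f"{self.site}/{self.resource_id}.xml")

# Usage (against a hypothetical service):
#   person = RestResource.find(1)
#   person.body = "<person><name>Example</name></person>"
#   person.save()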
Friday, December 7, 2007
XML Conference 2007 Second Day
This is the continuation of blogging from XML Conference 2007. There
are, of course, a lot of folks blogging about the conference. (1)
Dorothy Hoskins: Outside-In XML Publishing... What role can XML
play at the prettiest end of the print production spectrum? Instead
of struggling with XSL-FO in these cases, develop XML outside of
your formatting system and then eventually import your content near
the end. Both InDesign and FrameMaker are good options for this route.
FrameMaker 8 has good integration with DITA, in particular. (2)
Lisa Bos: Current Trends in XML Content Management Systems... Since
2000, publishers have grasped the importance of XML, but in the early
days there were not any solutions that fit them well. Today, there
is a huge number of XML products targeted toward publishers, some
of which are actually helpful. (3) Robin Doran and Matthew Browning:
BBC iPlayer Content production: The Evolution of an XML Tool-Chain...
The iPlayer is being developed to allow streaming of scheduled BBC
TV and Radio shows. The scheduling information itself is quite complex
and delivered in the emerging XML standard called TVA, which the BBC
is helping along. (4) Micah Dubinko: WebPath, Querying the Web as
XML... Pulling random XML off of the web rarely works as promised,
though some have exaggerated this problem. (5) Mark Birbeck: XForms,
REST, XQuery, and skimming... The client in web applications is too
thin and provides insufficient technology to make building web
applications easy. XForms explicitly allows these functions to be
cleanly broken apart. With XForms, as with Ajax, automatic UI updates
without reloads are possible, but that point is already well publicized. Less
commonly talked about is the ability to drive the UI with data
types -- for example, rendering a datetime with a date selector. More Information See also Elliotte Rusty Harold's blog: Click Here
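As a small illustration of that closing point about driving the UI with data types, the sketch below embeds an XForms model whose binds type two instance nodes as xsd:date and xsd:integer, which is what lets a conforming XForms client render a date picker or numeric control instead of a bare text box. The element names follow XForms 1.0/1.1 as recalled; the booking form and the Python walk over the binds are invented for illustration.

# An illustration of "drive the UI with data types": an XForms bind that
# types an instance node as xsd:date, so a conforming XForms client can
# render the bound input as a date picker rather than a plain text box.
# Element names follow XForms 1.0/1.1 as recalled; the booking form is
# invented. The small parse below just lists the declared types.
import xml.etree.ElementTree as ET

XF = "http://www.w3.org/2002/xforms"

FORM = f"""
<xf:model xmlns:xf="{XF}" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xf:instance>
    <booking xmlns="">
      <departure/>
      <passengers/>
    </booking>
  </xf:instance>
  <xf:bind nodeset="/booking/departure" type="xsd:date"/>
  <xf:bind nodeset="/booking/passengers" type="xsd:integer"/>
</xf:model>
"""

for bind in ET.fromstring(FORM).findall(f"{{{XF}}}bind"):
    print(bind.get("nodeset"), "->", bind.get("type"))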
DataDirect Updates XML Converters and XQuery Engine
Connectivity software developer DataDirect Technologies has launched
version 3.1 of its XML Converters and XQuery engine at the XML 2007
conference and exposition in Boston. According to DataDirect, the
new version of the XML Converters for Java and .NET provides bi-directional,
programmatic access to non-XML files, including electronic data
interchange (EDI), flat files and other legacy formats. It also
offers API support to dynamically fetch the XML schema for conversion
and standard exchange format (SEF) support for custom EDI needs.
The company said that the converters also support B2B integration
standards such as X12, EDIFACT (EDI for Administration, Commerce and
Transport), International Air Transport Association (IATA), and Health
Level Seven (HL7). From the text of the announcement: "DataDirect XML
Converters plug into the DataDirect XQuery product, an enterprise-grade
XQuery processor that supports the W3C standard and allows data to be
easily transformed, aggregated and enriched -- providing seamless
integration for all data formats supported by the DataDirect XML
Converters. DataDirect XQuery version 3.1 includes expanded database
support for both the Enterprise and Community edition of MySQL server
and full update support for relational data including Oracle 11g,
Informix, and PostgreSQL. Featuring extended file type support and
output enhancements, DataDirect XQuery version 3.1 further simplifies
the performance of combining and processing heterogeneous data sources.
The product now supports the ability to query new office document
standards like OpenDocument Format, Office Open XML and XML-based
versions of PDF." More Information See also the announcement: Click Here
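The core idea behind the converters, exposing non-XML sources as XML so a single XQuery layer can address every format uniformly, can be sketched without reference to DataDirect's actual Java/.NET API, which is not quoted here. The Python below converts one invented pipe-delimited "order" record into XML, the sort of intermediate form a query engine could then join with relational or EDI-derived data; the record layout and field names are hypothetical.

# A hedged sketch of the one-direction half of the conversion idea described
# above (non-XML in, XML out) -- not DataDirect's actual API. The
# pipe-delimited "order" layout and field names are invented.
import xml.etree.ElementTree as ET

FIELDS = ["order_id", "customer", "sku", "quantity"]   # hypothetical layout

def flat_record_to_xml(line: str) -> ET.Element:
    """Turn one pipe-delimited record into an <order> element."""
    order = ET.Element("order")
    for name, value in zip(FIELDS, line.strip().split("|")):
        ET.SubElement(order, name).text = value
    return order

# Once every source looks like XML, a query layer can aggregate them; here we
# just serialize the converted record to show the round trip.
record = "10042|ACME Corp|WIDGET-7|150"
print(ET.tostring(flat_record_to_xml(record), encoding="unicode"))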