Search This Blog

Friday, August 13, 2010

Cloud Computing, SOA and Windows Azure

"The Windows Azure platform is an Internet-scale cloud computing
services platform hosted in Microsoft data centers. Windows tools
provide functionality to build solutions that include a cloud services
operating system and a set of developer services. The key parts of the
Windows Azure platform are: Windows Azure -- application container,
Microsoft SQL Azure, and Windows Azure platform AppFabric.

The Windows Azure platform is part of the Microsoft cloud, which
consists of multiple categories of services: (1) Cloud-based
applications: These are services that are always available and highly
scalable. They run in the Microsoft cloud that consumers can directly
utilize. Examples include Bing, Windows Live Hotmail, Office.
(2) Software services: These services are hosted instances of
Microsoft's enterprise server products that consumers can use directly.
Examples include Exchange Online, SharePoint Online, Office
Communications Online, etc. (3) Platform services: This is where the
Windows Azure platform itself is positioned. It serves as an application
platform public cloud that developers can use to deploy next-generation,
Internet-scale, and always available solutions. (4) Infrastructure
services: There is a limited set of elements of the Windows Azure
platform that can support cloud-based infrastructure resources.

SQL Azure is a cloud-based relational database service built on SQL
Server technologies that exposes a fault-tolerant, scalable, and
multi-tenant database service. SQL Azure does not exist as hosted
instances of SQL Server; instead, it uses a cloud fabric layer to abstract
and encapsulate the underlying technologies required for provisioning,
server administration, patching, health monitoring, and lifecycle
management.

Summary of Key Points: (1) The Windows Azure platform is primarily a
PaaS deployed in a public cloud managed by Microsoft. (2) Windows Azure
platform provides a distinct set of capabilities suitable for building
scalable and reliable cloud-based services. (3) The overall Windows
Azure platform further encompasses SQL Azure and Windows Azure platform
AppFabric." More Info See also XML in Clinical Research and Healthcare Industries:

Computers in Patient Care: The Promise and the Challenge

"Why is it that in terms of automating medical information, we are
still attempting to implement concepts that are decades old? With all
of the computerization of so many aspects of our daily lives, medical
informatics has had limited impact on day-to-day patient care. We have
witnessed slow progress in using technology to gather, process, and
disseminate patient information, to guide medical practitioners in
their provision of care and to couple them to appropriate medical
information for their patients' care...

The first challenge in applying medical informatics to the daily
practice of care is to decide how computerization can help patient care
and to determine the necessary steps to achieve that goal. Several
other early attempts were made to apply computerization to health
care. Most were mainframe-based, driving 'dumb' terminals. Many dealt
only with the low-hanging fruit of patient order entry and results
reporting, with little or no additional clinical data entry. Also,
many systems did not attempt to interface with the information
originator (e.g., physician) but rather delegated the system use to
a hospital ward clerk or nurse, thereby negating the possibility of
providing medical guidance to the physician, such as a warning about
the dangers of using a specific drug.

We have made significant technological advances that solve many of
these early shortcomings. Availability of mass storage is no longer a
significant issue. Starting with a freezer-sized disk drive holding 7 MB
(which was not very reliable), we now have enterprise storage systems
providing extremely large amounts of storage for less than $1 per
gigabyte, and they don't take up an entire room. This advance in
storage has been accompanied by a concomitant series of advances in
file structures, database design, and database maintenance utilities,
greatly simplifying and accelerating data access and maintenance.
[But] if we truly want to develop an information utility for
health-care delivery in an acute care setting (such as an intensive
care unit or emergency department), we need to strive for overall
system reliability at least on the order of our electric power grid...

One significant issue is the balkanization of medical computerization.
Historically, there has been little appreciation of the need for an
overall system. Instead we have a proliferation of systems that do
not integrate well with each other. For example, a patient who is
cared for in my emergency department may have his/her data spread
across nine different systems during a single visit, with varying
degrees of integration and communication among these systems: EDIS
(emergency department information system), prehospital care (ambulance)
documentation system, the hospital ADT (admission/discharge/transfer)
system, computerized clinical laboratory system, electronic data
management (medical records) imaging system, hospital pharmacy system,
vital-signs monitoring system, hospital radiology ordering system,
and PACS system...." More Info See also XML in Clinical Research and Healthcare Industries:

IETF Approves Symmetric Key Package Content Type Specification

The Internet Engineering Steering Group (IESG) has announced approval
of the "Symmetric Key Package Content Type" Specification as an IETF
Proposed Standard. Hannes Tschofenig is the document shepherd for this
document, and Tim Polk is the IETF Responsible Area Director. The
specification was produced by members of the IETF Provisioning of
Symmetric Keys (KEYPROV) Working Group.

"This document provides the ASN.1 variant of the Portable Symmetric Key
Container (PSKC), which is defined using XML in the I-D 'Portable
Symmetric Key Container (PSKC)'. The symmetric key container defines a
transport-independent mechanism for carrying one or more symmetric keys
as well as any associated attributes. The container by itself is insecure;
it can be secured using either the Dynamic Symmetric Key Provisioning
Protocol (DSKPP) or a CMS protecting content type, per RFC 5652. In
addition to the key container, this document also defines ASN.1 versions
of the XML elements and attributes defined in PSKC.

Working Group Summary: The WG agreed that this container would be the
optional container, but there was a contingent (both in the WG and in
the IEEE) that wanted the ASN.1 container. The format for the container
has been stable since version -02. The ASN.1 converted XML elements
and attributes were added in the last version to ensure alignment with
PSKC.

Document Quality: The text of this document is derived from the XML
elements and attributes defined in draft-ietf-keyprov-pskc. As such,
this document represents the ASN.1 based version of the XML-based
counterpart. More Info See also the IETF Provisioning of Symmetric Keys (KEYPROV) Working Group:

Building an AtomPub Server Using WCF Data Services

OData (odata.org) builds on the HTTP-based goodness of Atom for
publishing data; AtomPub for creating, updating and deleting data;
and the Microsoft Entity Data Model (EDM) for defining the types of
data.

If you have a JavaScript client, you can get the data back directly in
JSON instead of Atom format, and if you've got something else --
including Excel, the Microsoft .NET Framework, PHP, AJAX and more --
there are client libraries for forming OData requests and consuming
OData responses.

If you're using the .NET Framework on the server side, Microsoft also
provides an easy-to-use library called WCF Data Services for exposing
.NET Framework types or databases supported by the Microsoft Entity
Framework as OData sources. This makes it easy to expose your data
over the Internet in an HTTP- and standards-based way.
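As a rough illustration (not taken from the article), an OData query URI with system query options such as $filter, $top and $format can be assembled, and a JSON payload consumed, along these lines; the service root and payload below are hypothetical:

```python
import json

def odata_query_url(service_root, entity_set, **options):
    # OData system query options are $-prefixed; production code should
    # also percent-encode option values.
    qs = "&".join(f"${k}={v}" for k, v in options.items())
    return f"{service_root}/{entity_set}?{qs}"

# Hypothetical service root, for illustration only.
url = odata_query_url("http://example.org/svc.svc", "Products",
                      filter="Price gt 10", top=5, format="json")

# A trimmed, hypothetical OData-style JSON payload of the era:
payload = '{"d": {"results": [{"Name": "Widget", "Price": 12}]}}'
results = json.loads(payload)["d"]["results"]
print(url)
print([r["Name"] for r in results])
```

The same query expressed in Atom format would come back as an Atom feed rather than the "d"-wrapped JSON shown here.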

[However] there are some things that you might like to do with OData
that aren't quite part of the out-of-box experience, such as integrating
OData with existing Atom- and AtomPub-based readers and writers..." More Info

Computing Cloud Seen as Answer for Consolidated Audit Trail

"FTEN, a supplier of risk management software to bulge bracket firms on
Wall Street, has proposed that the Securities and Exchange Commission
rely on real-time data stored in a nationwide cloud of computing power
and networks to create an effective audit trail of stock market activity.

FTEN provides risk management, routing, surveillance, compliance and
market data services to market participants. The firm proposed in a
letter that the SEC look to already deployed and commercially available
systems that capture order and execution data in real-time from stock
exchanges, electronic communication networks, alternative trading systems
and dark pools to start creating the trail.

The data from all markets then could be mapped back to a unified
format that would create a normalized set of data that regulators
could review in real time for signs of market disruptions or abuse...
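The mapping step the proposal describes can be sketched in a few lines; the per-venue field names below are invented purely to illustrate normalizing heterogeneous execution records into one unified schema:

```python
# Hypothetical per-venue record layouts; field names are invented for illustration.
nyse_fill = {"sym": "IBM", "qty": 100, "px": 128.5, "ts": "09:30:01"}
dark_pool_fill = {"ticker": "IBM", "shares": 200, "price": 128.4, "time": "09:30:02"}

# Per-source mapping from native field names to a unified audit-trail schema.
FIELD_MAPS = {
    "nyse": {"sym": "symbol", "qty": "quantity", "px": "price", "ts": "timestamp"},
    "darkpool": {"ticker": "symbol", "shares": "quantity",
                 "price": "price", "time": "timestamp"},
}

def normalize(record, source):
    """Map a venue-native execution record onto the unified schema."""
    mapping = FIELD_MAPS[source]
    out = {mapping[k]: v for k, v in record.items()}
    out["source"] = source
    return out

trail = [normalize(nyse_fill, "nyse"), normalize(dark_pool_fill, "darkpool")]
print(trail[0]["symbol"], trail[1]["quantity"])
```

Once every venue's records share the same keys, regulators can scan a single stream for disruptions instead of reconciling incompatible feeds after the fact.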

Ted Myerson, FTEN CEO, said FTEN's commercially deployed At-Trade secure
data cloud already aggregates data from 50 sources, with a wide variety
of symbol directories, unifies it into a common format and feeds it back
to private firms... FTEN says it provides real-time risk management and
surveillance on as many as 17 billion shares of stock a day in the
United States. That, it says, equates to risk calculations involving
$150 billion worth of shares a day... FTEN did not put a price tag on
what it would take the securities industry to build out a consolidated
audit trail system based on its At-Trade cloud of compute power and
online data..." More Info

The Arrival of HTML 5: Lots of New Features, All Eagerly Awaited

"HTML (Hyper Text Markup Language) is one of the underpinning
technologies of the modern web, with the lion's share of web users'
Internet activities founded on it. HTML now stands on the brink of
the next change -- the coming of HTML 5. At present, the Internet
already contains a handful of HTML 5 specification outlines which
partially cover HTML 5 features and conceptions. In this article, we
review the current state of HTML and describe the most significant
HTML 5 innovations.

Offline Potential: Some time ago, a new specification for client-side
database support with interesting applications was introduced. While
this feature had vast potential, it has been excluded from current
specification drafts due to insufficient interest from vendors which
use various SQL back-ends. As such, the only offline feature currently
available in HTML 5 is flexible online/offline resources management
using cache manifests. Cache manifests allow an author of a document
to specify which referenced resources must be cached in browser data
store (e.g., static images, external CSS and JavaScript files) and
which must be retrieved from a server (e.g., time-sensitive data like
stock price graphs, responses from web services invoked from within
JavaScript). The manifest also provides means for specifying fallback
offline replacements for resources which must not be cached. This
mechanism gives the ability to compose HTML documents which can be
viewed offline.

REST in Forms: A REST application is characterized by a clear
separation between clients and servers, stateless communication with
the server (no client context is stored on the server between requests),
and a uniform client-server protocol that can be easily invoked from other
clients. Applied to HTTP, it encourages the use of URIs for identifying
all entities and standard HTTP methods like GET (retrieve), POST (change),
PUT (add) and DELETE (remove) for entity operations. HTML 5 now fully
supports issuing PUT and DELETE requests from HTML forms without any
workarounds. This is an unobtrusive, but ideologically important
innovation which brings more elegance into web architecture and simplifies
development of HTML UI for REST services.

Communicating Documents: Now documents opened in browsers can exchange
data using messages. Such data exchange may be useful on a web page
that includes several frames with the data loaded from different origins.
Usually, a browser does not allow JavaScript code to access/manipulate
the objects of other documents opened from a different origin. This is
done to prevent cross-site scripting and other malicious and destructive
endeavors..." More Info See also HTML5 differences from HTML4:
Members of the W3C Device APIs and Policy Working Group have published
a First Public Working Draft for "The Messaging API". The WG was
chartered to create client-side APIs that enable the development of Web
Applications and Web Widgets that interact with device services such
as Calendar, Contacts, Camera... This document "represents the early
consensus of the group on the scope and features of the proposed
Messaging API; in particular, the group intends to work on messages
management (move, delete, copy, etc.) in a separate specification.
Issues and editors' notes in the document highlight some of the points
on which the group is still working and would particularly like to
receive feedback."

The Messaging API specification defines a high-level interface to
Messaging functionality, including SMS, MMS and Email. It includes
APIs to create, send and receive messages. The specification does not
replace RFCs for Mail or SMS URLs, but includes complementary
functionality to these.

Security: The API defined in this specification can be used to create
and subscribe for incoming messages through different technologies.
Sending messages usually has a cost associated with it, especially for
SMSs and MMSs. Furthermore, this cost may depend on the message attributes
(e.g. destination address) or external conditions (e.g. roaming status).
Apart from billing implications, there are also privacy considerations
due to the capability to access message contents. A conforming
implementation of this specification must provide a mechanism that
protects the user's privacy and this mechanism should ensure that no
message is sent or no subscription is established without the user's
express permission.
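The permission requirement can be modeled abstractly; this sketch is our own illustration, not the W3C API (which is defined in JavaScript), and the class and callback names are invented:

```python
class PermissionDenied(Exception):
    pass

class MessagingService:
    """Sketch of the rule that no message is sent without the user's
    express permission; prompt_user stands in for a real UI that must
    display the requesting document's origin."""
    def __init__(self, prompt_user):
        self.prompt_user = prompt_user
        self.outbox = []

    def send_sms(self, origin, to, body):
        if not self.prompt_user(f"Allow {origin} to send an SMS to {to}?"):
            raise PermissionDenied("user refused")
        self.outbox.append((to, body))

# A pre-arranged trust relationship (e.g., a widget runtime) can be modeled
# as a callback that grants permission without prompting.
svc = MessagingService(prompt_user=lambda q: True)
svc.send_sms("http://example.org", "+15550100", "hi")
print(len(svc.outbox))
```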

A user agent must not send messages or subscribe for incoming ones
without the express permission of the user. A user agent must acquire
permission through a user interface, unless they have prearranged
trust relationships with users, as described below. The user interface
must include the URI of the document origin, as defined in HTML 5... A
user agent may have prearranged trust relationships that do not require
such user interfaces. For example, while a Web browser will present a
user interface when a Web site requests an SMS subscription, a Widget
Runtime may have a prearranged, delegated security relationship with
the user and, as such, a suitable alternative security and privacy
mechanism with which to authorize that operation...." More Info

Thursday, March 18, 2010

Public Data: Translating Existing Models to RDF

"As we encourage linked data adoption within the UK public sector,
something we run into again and again is that (unsurprisingly) particular
domain areas have pre-existing standard ways of thinking about the data
that they care about. There are existing models, often with multiple
serialisations, such as in XML and a text-based form, that are supported
by existing tool chains. In contrast, if there is existing RDF in that
domain area, it's usually been designed by people who are more interested
in the RDF than in the domain area, and is thus generally more focused
on the goals of the typical casual data re-user rather than the
professionals in the area...
To give an example, the international statistics community uses SDMX for
representing and exchanging statistics... SDMX includes a
well-thought-through model for statistical datasets and the observations
within them, as well as standard concepts for things like gender, age,
unit multipliers and so on. By comparison, SCOVO, the main RDF model for
representing statistics, barely scratches the surface. This isn't the
only example: the INSPIRE Directive defines how geographic information
must be made available. GEMINI defines the kind of geospatial metadata
that that community cares about. The Open Provenance Model is the result
of many contributors from multiple fields, and again has a number of
serialisations.
You could view this as a challenge: experts in their domains already have
models and serialisations for the data that they care about; how can we
persuade them to adopt an RDF model and serialisations instead? But
that's totally the wrong question. Linked data doesn't, can't and won't
replace existing ways of handling data. The question is really about
how to enable people to reap these benefits; the answer, because
HTTP-based addressing and typed linkage is usually hard to introduce
into existing formats, is usually to publish data using an RDF-based
model alongside existing formats. This might be done by generating an
RDF-based format (such as RDF/XML or Turtle) as an alternative to the
standard XML or HTML, accessible via content negotiation, or by
providing a GRDDL transformation that maps an XML format into RDF/XML...
Modelling is a complex design activity, and you're best off avoiding
doing it if you can. That means reusing conceptual models that have been
built up for a domain as much as possible and reusing existing vocabularies
wherever you can. But you can't and shouldn't try to avoid doing design
when mapping from a conceptual model to a particular modelling paradigm
such as a relational, object-oriented, XML or RDF model. If you're
mapping to RDF, remember to take advantage of what it's good at such
as web-scale addressing and extensibility, and always bear in mind how
easy or difficult your data will be to query. There is no point
publishing linked data if it is unusable..."
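To make the RDF-alongside-existing-formats idea concrete, here is a minimal sketch of rendering a statistical observation as Turtle triples; the URIs and the SCOVO-style property names are illustrative assumptions, not taken from the post:

```python
# Hypothetical URIs, loosely modeled on SCOVO-style statistical observations.
def to_turtle(subject_uri, triples):
    """Render (predicate, object) pairs for one subject as a Turtle block.
    Objects are passed pre-formatted (URI in <>, or a quoted literal)."""
    body = " ;\n".join(f"    <{p}> {o}" for p, o in triples)
    return f"<{subject_uri}>\n{body} ."

obs = to_turtle(
    "http://example.org/obs/1",
    [("http://purl.org/NET/scovo#dataset", "<http://example.org/ds/pop>"),
     ("http://example.org/def/gender", '"female"'),
     ("http://purl.org/NET/scovo#value", "1234")],
)
print(obs)
```

In practice one would generate output like this (or RDF/XML) from the existing XML serialisation, e.g. via a GRDDL transform, and serve it through content negotiation rather than hand-building strings.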
http://www.jenitennison.com/blog/node/142
See also Linked Data: http://www.w3.org/standards/semanticweb/data

There is REST for the Weary Developer

This brief article provides an example of working with the
Representational State Transfer style of software architecture. REST
(Representational State Transfer) is a style of software architecture
for accessing information on the Web. The RESTful service refers to
web services as resources that use XML over the HTTP protocol. The
term REST dates back to 2000, when Roy Fielding used it in his doctoral
dissertation. The W3C recommends using WSDL 2.0 as the language for
defining REST web services. To explain REST, we take an example of
purchasing items from a catalog application...
First we will define CRUD operations for this service as follows. The
term CRUD stands for the basic database operations Create, Read, Update,
and Delete. In the example, you can see that creating a new item with an
Id is not supported. When a request for a new item is received, an Id is
created and assigned to the new item. Also, we are not supporting the
update and delete operations for the collection of items. Update and
delete are supported for the individual items...
Interface documents: How does the client know what to expect in return
when it makes a call for CRUD operations? The answer is the interface
document. In this document you can define the CRUD operation mapping,
the Item.xsd file, and the request and response XML. You can have
separate XSDs for request and response, or the response can have text
such as 'success' in return for the methods other than GET...
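The CRUD mapping described above can be sketched as a minimal in-memory service (our own illustration, not the article's code): the server assigns Ids on create, and update/delete are only allowed on individual items, never on the collection:

```python
class CatalogService:
    """In-memory sketch of the article's catalog: Ids are server-assigned,
    and update/delete apply to individual items only."""
    def __init__(self):
        self.items = {}
        self.next_id = 1

    def create(self, item):           # POST /items -- Id assigned server-side
        item_id = self.next_id
        self.next_id += 1
        self.items[item_id] = item
        return item_id

    def read(self, item_id=None):     # GET /items or GET /items/{id}
        return self.items if item_id is None else self.items[item_id]

    def update(self, item_id, item):  # PUT /items/{id} only, not the collection
        self.items[item_id] = item
        return "success"

    def delete(self, item_id):        # DELETE /items/{id} only
        del self.items[item_id]
        return "success"

svc = CatalogService()
new_id = svc.create({"name": "widget", "price": 10})
svc.update(new_id, {"name": "widget", "price": 12})
print(new_id, svc.read(new_id)["price"])
```

A real service would sit behind an HTTP dispatcher that routes each method/URI pair to one of these handlers and returns the XML described in the interface document.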
There are other frameworks available for RESTful services. Some of them
are listed here: the Sun reference implementation for JAX-RS, code-named
Jersey, which uses an HTTP web server called Grizzly and the Grizzly
servlet container; Ruby on Rails; Restlet; Django; Axis2.
http://www.devx.com/architect/Article/44341

Now IBM's Getting Serious About Public IaaS

James Staten, Forrester Blog

"IBM has been talking a good cloud game for the last year or so. They
have clearly demonstrated that they understand what cloud computing
is, what customers want from it, and have put forth a variety of offerings
and engagements to help customers head down this path -- mostly through
internal cloud and strategic rightsourcing options.
But its public cloud efforts, outside of application hosting, have been
a bit of wait and see. Well, the company is clearly getting its act
together in the public cloud space with today's announcement of the
Smart Business Development and Test Cloud, a credible public Infrastructure
as a Service (IaaS) offering. This new service is an extension of its
developerWorks platform and gives its users a virtual environment through
which they can assemble, integrate and validate new applications. Pricing
on the service is as you would expect from an IaaS offering, and free
for a limited time...
Certainly any IaaS can be used for test and development purposes, so IBM
isn't breaking new ground here. But it's off to a solid start, with stated
support from test and dev specialist partners SOASTA, VMLogix, AppFirst
and Trinity Software bringing their tools to the IBM test cloud..."
http://blogs.forrester.com/james_staten/10-03-16-now_ibm%E2%80%99s_getting_serious_about_public_iaas
See also Jeffrey Schwartz in GCN: http://gcn.com/articles/2010/03/17/ibm-public-cloud-service.aspx

Aggregative Digital Libraries: D-NET Software Toolkit and OAIster System

"Aggregative Digital Library Systems (ADLSs) provide end users with web
portals to operate over an information space of descriptive metadata
records, collected and aggregated from a pool of possibly heterogeneous
repositories. Due to the costs of software realization and system
maintenance, existing "traditional" ADLS solutions are not easily
sustainable over time for the supporting organizations. Recently, the
DRIVER EC project proposed a new approach to ADLS construction, based
on Service-Oriented Infrastructures. The resulting D-NET software toolkit
enables a running, distributed system in which one or multiple
organizations can collaboratively build and maintain their
service-oriented ADLSs in a sustainable way. Aggregative Digital Library
Systems (ADLSs) typically address two main challenges: (1) populating an
information space of metadata records by harvesting and normalizing
records from several OAI-PMH compatible repositories; and (2) providing
portals to deliver the functionalities required by the user community
to operate over the aggregated information space, for example, search,
annotations, recommendations, collections, user profiling, etc.
Repositories are defined here as software systems that typically offer
functionalities for storing and accessing research publications and
related metadata information. Access usually takes the twofold form of
search through a web portal and bulk metadata retrieval through OAI-PMH
interfaces. In recent years, research institutions, university libraries,
and other organizations have been increasingly setting up repository
installations (based on technologies such as Fedora, ePrints, DSpace,
Greenstone, OpenDlib, etc.) to improve the impact and visibility of their
user communities' research outcomes.
In this paper, we advocate that D-NET's 'infrastructural' approach to
ADLS realization and maintenance proves to be generally more sustainable
than 'traditional' ones. To demonstrate our thesis, we report on the
sustainability of the 'traditional' OAIster System ADLS, based on DLXS
software (University of Michigan), and that of the 'infrastructural'
DRIVER ADLS, based on D-NET.
As an exemplar of traditional solutions we rely on the well-known OAIster
System, whose technology was realized at the University of Michigan.
The analysis will show that constructing static or evolving ADLSs using
D-NET can notably reduce software realization costs and that, for
evolving requirements, refinement costs for maintenance can be made
more sustainable over time..."
http://www.dlib.org/dlib/march10/manghi/03manghi.html

Definitions for Expressing Standards Requirements in IANA Registries

The Internet Engineering Steering Group (IESG) has received a request
to consider the specification "Definitions for Expressing Standards
Requirements in IANA Registries" as a Best Current Practice RFC (BCP).
The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action; please send substantive comments to the
IETF mailing lists by 2010-04-14.
Abstract: "RFC 2119 defines words that are used in IETF standards
documents to indicate standards compliance. These words are fine for
defining new protocols, but there are certain deficiencies in using
them when it comes to protocol maintainability. Protocols are maintained
by either updating the core specifications or via changes in protocol
registries. For example, security functionality in protocols often
relies upon cryptographic algorithms that are defined in external
documents. Cryptographic algorithms have a limited life span, and new
algorithms are regularly phased in to replace older algorithms. This
document proposes standard terms to use in protocol registries and
possibly in standards track and informational documents to indicate the
life cycle support of protocol features and operations.
The proposed requirement words for IANA protocol registries include the
following. (1) MANDATORY: This is the strongest requirement, and for an
implementation to ignore it there MUST be a valid and serious reason.
(2) DISCRETIONARY: For Implementations, any implementation MAY or MAY
NOT support this entry in the protocol registry; the presence or
omission of this MUST NOT be used to judge implementations on standards
compliance. For Operations, any use of this registry entry in operation
is supported; ignoring or rejecting requests using this protocol
component MUST NOT be used as a basis for asserting lack of compliance.
(3) OBSOLETE: For Implementations, new implementations SHOULD NOT
support this functionality; for Operations, any use of this
functionality in operation MUST be phased out. (4) ENCOURAGED: This
word is added to the registry entry when new functionality is added and
before it is safe to rely solely on it. Protocols that have the ability
to negotiate capabilities MAY NOT need this state. (5) DISCOURAGED: This
requirement is placed on an existing function that is being phased
out. This is similar in spirit to both MUST- and SHOULD- as defined and
used in certain RFCs such as RFC 4835. (6) RESERVED: Sometimes there
is a need to reserve certain values to avoid problems, such as values
that have been used in implementations but were never formally registered.
In other cases reserved values are magic numbers that may be used in
the future as escape valves if the number space becomes too small. (7)
AVAILABLE: A value that can be allocated by IANA at any time..."
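The seven proposed words can be modeled as an enumeration; the "safe for a new implementation" reading below is our own illustration of how a registry consumer might apply them, not text from the draft:

```python
from enum import Enum

class RegistryStatus(Enum):
    """The draft's proposed requirement words for IANA registry entries."""
    MANDATORY = "must be supported"
    DISCRETIONARY = "may or may not be supported"
    OBSOLETE = "new implementations should not support"
    ENCOURAGED = "newly added; not yet safe to rely on alone"
    DISCOURAGED = "existing function being phased out"
    RESERVED = "not available for allocation"
    AVAILABLE = "free for IANA to allocate"

def implement_in_new_code(status: RegistryStatus) -> bool:
    # Rough guidance (an assumption, not the draft's rule): a new
    # implementation picks up mandatory, discretionary and encouraged
    # entries, and skips obsolete/discouraged/unallocated ones.
    return status in (RegistryStatus.MANDATORY,
                      RegistryStatus.DISCRETIONARY,
                      RegistryStatus.ENCOURAGED)

print(implement_in_new_code(RegistryStatus.OBSOLETE))
```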
This document is motivated by the experiences of the editors in trying
to maintain registries for DNS and DNSSEC. For example, DNS defines a
registry for hash algorithms used for a message authentication scheme
called TSIG; the first entry in that registry was for HMAC-MD5. The
DNSEXT working group decided to try to decrease the number of algorithms
listed in the registry and add a column to the registry listing the
requirements level for each one. Upon reading that HMAC-MD5 was tagged
as 'OBSOLETE', a firestorm started. It was interpreted as the DNS
community making a statement on the status of HMAC-MD5 for all uses.
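To make the TSIG example concrete: HMAC-MD5 is an ordinary HMAC computation, shown here alongside an HMAC-SHA variant purely for comparison (the key and message are invented; real TSIG MACs are computed over DNS wire-format data):

```python
import hmac
import hashlib

# Illustrative only: the point of tagging HMAC-MD5 as OBSOLETE is that new
# deployments should prefer the HMAC-SHA family registered later.
key = b"shared-tsig-secret"
message = b"example.dns.update.message"

md5_mac = hmac.new(key, message, hashlib.md5).hexdigest()
sha256_mac = hmac.new(key, message, hashlib.sha256).hexdigest()
print(len(md5_mac), len(sha256_mac))   # 32 vs 64 hex digits
```

Note that the registry tag governs life-cycle expectations, not the math: an OBSOLETE algorithm still verifies; the community is being told to stop relying on it.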
http://xml.coverpages.org/draft-ogud-iana-protocol-maintenance-words-03.txt
See also 'Using MUST and SHOULD and MAY': http://www.ietf.org/tao.html#anchor42

New Models of Human Language to Support Mobile Conversational Systems

W3C has announced a Workshop on Conversational Applications: Use Cases
and Requirements for New Models of Human Language to Support Mobile
Conversational Systems. The workshop will be held June 18-19, 2010
in New Jersey, US, hosted by Openstream. The main outcome of the
workshop will be the publication of a document that will serve as a
guide for improving the W3C language model. W3C membership is not
required to participate in this workshop. The current program committee
consists of: Paolo Baggia (Loquendo), Daniel C. Burnett (Voxeo),
Deborah Dahl (W3C Invited Expert), Kurt Fuqua (Cambridge Mobile),
Richard Ishida (W3C), Michael Johnston (AT&T), James A. Larson (W3C
Invited Expert), Sol Lerner (Nuance), David Nahamoo (IBM), Dave Raggett
(W3C), Henry Thompson (W3C/University of Edinburgh), and Raj Tumuluri
(Openstream).
"A number of developers of conversational voice applications feel that
the model of human language currently supported by W3C standards such
as SRGS, SISR and PLS is not adequate and that developers need new
capabilities in order to support more sophisticated conversational
applications. The goal of the workshop therefore is to understand the
limitations of the current W3C language model in order to develop a
more comprehensive model. We plan to collect and analyze use cases and
prioritize requirements that ultimately will be used to identify
improvements to the W3C language model. Just as W3C developed SSML 1.1
to broaden the languages for which SSML is useful, this effort will
result in improved support for language capabilities that are
unsupported today.
Suggested Workshop topics for position papers include: (1) Use cases
and requirements for grammar formalisms more powerful than SRGS's
context-free grammars that are needed to implement tomorrow's
applications. (2) What are the common aspects of human language models
for different languages that can be factored into reusable modules?
(3) Use cases and requirements for realigning/extending SRGS, PLS and
SISR to support more powerful human language models. (4) Use cases and
requirements for sharing grammars among concurrent applications. (5) Use
cases that illustrate requirements for natural language capabilities
for conversational dialog systems that cannot easily be implemented
using the current W3C conversational language model. (6) Use cases and
requirements for speech-enabled applications that can be used across
multiple languages (English, German, Spanish, ...) with only minor
modifications. (7) Use cases and requirements for composing the
behaviors of multiple speech-enabled applications that were developed
independently, without requiring changes to the applications. (8) Use
cases and requirements motivating the need to resolve ellipses and
anaphoric references to previous utterances.
Position papers, due April 2, 2010, must describe requirements and use
cases for improving W3C standards for conversational interaction and
how the use cases justify one or more of these topics: Formal notations
for representing grammar in: Syntax, Morphology, Phonology, Prosodics;
Engine standards for improvement in processing: Syntax, Morphology,
Phonology, Lexicography; Lexicography standards for: parts-of-speech,
grammatical features and polysemy; Formal semantic representation of
human language including: verbal tense, aspect, valency, plurality,
pronouns, adverbs; Efficient data structures for binary representation
and passing of: parse trees, alternate lexical/morphologic analysis,
alternate phonologic analysis; Other suggested areas or improvements
for standards-based conversational systems development..."
http://www.w3.org/2010/02/convapps/cfp
See also W3C Workshops: http://www.w3.org/2003/08/Workshops/

Integrating Composite Applications on the Cloud Using SCA

"Elastic computing has made it possible for organizations to use cloud
computing and a minimum of computing resources to build and deploy a
new generation of applications. Using the capabilities provided by
the cloud, enterprises can quickly create hybrid composite applications
on the cloud using the best practices of service-component architecture
(SCA).
Since SCA promotes all the best practices used in service-oriented
architecture (SOA), building composite applications using SCA is one
of the best guidelines for creating cloud-based composite applications.
Applications created using several different runtimes running on the
cloud can be leveraged to create a new component, and hybrid composite
applications which scale on demand with private/public cloud models
can also be built using secure transport data channels.
In this article, we show how to build and integrate composite applications using Apache Tuscany, the Eucalyptus open source cloud framework, and OpenVPN to create a hybrid composite application. To show that distributed applications comprising composite modules (distributed across the cloud and enterprise infrastructure) can be integrated and function as a single unit using SCA without compromising on security, we create a composite application whose components are spread over different domains distributed across the cloud and the enterprise infrastructure. We then use SCA to host and integrate this composite application so that it fulfills the necessary functional requirements. To ensure information and data security, we set up a virtual private network (VPN) between the different domains (cloud and enterprise), creating a point-to-point encrypted network which provides secure information exchange between the two environments...
This project illustrates that distributed applications comprising composite modules (distributed across the cloud and enterprise infrastructure) can be integrated and made to function as a single unit using Service Component Architecture (SCA) without compromising on security..."
http://www.drdobbs.com/web-development/223800269

IETF Update: Specification for a URI Template

A revised version of the IETF Standards Track Internet Draft "URI Template" has been published. From the abstract: "A URI Template is a compact sequence of characters for describing a range of Uniform Resource Identifiers through variable expansion. This specification defines the URI Template syntax and the process for expanding a URI Template into a URI, along with guidelines for the use of URI Templates on the Internet."
Overview: "A Uniform Resource Identifier (URI) is often used to identify a specific resource within a common space of similar resources... URI Templates provide a mechanism for abstracting a space of resource identifiers such that the variable parts can be easily identified and described. URI templates can have many uses, including discovery of available services, configuring resource mappings, defining computed links, specifying interfaces, and other forms of programmatic interaction with resources.
A URI Template provides both a structural description of a URI space and, when variable values are provided, a simple instruction on how to construct a URI corresponding to those values. A URI Template is transformed into a URI-reference by replacing each delimited expression with its value as defined by the expression type and the values of variables named within the expression. The expression types range from simple value expansion to multiple key=value lists. The expansions are based on the URI generic syntax, allowing an implementation to process any URI Template without knowing the scheme-specific requirements of every possible resulting URI.
A URI Template may be provided in absolute form, as in the examples above, or in relative form if a suitable base URI is defined... A URI Template is also an IRI template, and the result of template processing can be rendered as an IRI by transforming the pct-encoded sequences to their corresponding Unicode characters if the characters are not in the reserved set... Parsing a valid URI Template expression does not require building a parser from the given ABNF. Instead, the set of allowed characters in each part of a URI Template expression has been chosen to avoid complex parsing, and breaking an expression into its component parts can be achieved by a series of splits of the character string. Example Python code [is planned] that parses a URI Template expression and returns the operator, argument, and variables as a tuple..."
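The variable-expansion process the draft describes can be sketched in a few lines of Python. This is a toy illustration only: it handles plain {name} expressions and simple percent-encoding, not the operators, modifiers, or key=value list expansions of the draft, and the function name and behavior for missing variables are this sketch's own choices:

```python
import re

def expand(template, variables):
    """Expand simple {name} expressions in a URI Template.

    Toy sketch: only plain variable expansion, not the full draft
    syntax. Unknown variables expand to the empty string.
    """
    def replace(match):
        value = str(variables.get(match.group(1), ""))
        # Percent-encode characters outside the URI 'unreserved' set.
        return "".join(
            c if c.isalnum() or c in "-._~" else
            "".join("%{:02X}".format(b) for b in c.encode("utf-8"))
            for c in value
        )
    return re.sub(r"\{([A-Za-z0-9_]+)\}", replace, template)

print(expand("http://example.com/{user}/status/{id}",
             {"user": "fred flintstone", "id": 42}))
# http://example.com/fred%20flintstone/status/42
```

A real implementation would also split each expression into operator, argument, and variable list, as the planned example code in the draft does.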
http://xml.coverpages.org/draft-gregorio-uritemplate-04.txt

What Standardization Will Mean For Ruby

Mirko Stocker, InfoQ
Ruby's inventor Matz announced plans to standardize Ruby in order to "improve the compatibility between different Ruby implementations [..] and to ease Ruby's way into the Japanese government". The first proposal for standardization will be to the Japanese Industrial Standards Committee and, in a further step, to the ISO, to become an international standard. For now, a first draft (that weighs in at over 300 pages) and official announcement are available. Alternatively, there's a wiki under development to make the standard available in HTML format.

A very different approach to uniting Ruby implementations is the RubySpec project -- a community-driven effort to build an executable specification. RubySpec is an offspring of the Rubinius project... [But] What do our readers think: will it be easier to introduce Ruby in their organizations if there's an ISO standard behind it?

According to RubySpec lead Brian Ford: "I think the ISO Standardization effort is very important for Ruby, both for the language and for the community, which in my mind includes the Ruby programmers, people who use software written in Ruby, and the increasing number of businesses based on or using software written in Ruby. The Standardization document and RubySpec are complementary in my view. The document places primary importance on describing Ruby in prose with appropriate formatting formalities. The document envisions essentially one definition of Ruby. RubySpec, in contrast, places primary importance on code that demonstrates the behavior of Ruby. However, RubySpec also emphasizes describing Ruby in prose as an essential element of the executable specification, and this is the reason we use RSpec-compatible syntax. RubySpec also attempts to capture the behavior of the union of all Ruby implementations. It provides execution guards that document the specs for differences between implementations. For example, not all platforms used to implement Ruby support forking a process.
So the specs have guards for which implementations provide that feature... This illustrates an important difference between the ISO Standardization document and RubySpec. The ISO document can simply state that a particular aspect of the language is "implementation defined" and provide no further guidance. Unfortunately, implementing such a standard can be difficult, as we have seen with the confusion caused by various browser vendors attempting to implement CSS. RubySpec attempts to squeeze the total number of unspecified Ruby behaviors to the smallest size possible..."
http://www.infoq.com/news/2010/03/ruby-standardization
See also the Ruby Standard Wiki: http://wiki.ruby-standard.org/wiki/Main_Page

New Release of Oxygen XML Editor and Oxygen XML Author Supports DITA

Developers of the Oxygen XML Editor and Author toolsuite have announced the immediate availability of version 11.2 of the XML Editor and XML Author, containing a comprehensive set of tools supporting all the XML-related technologies. Oxygen combines content author features like the CSS-driven visual XML editor with a fully featured XML development environment. It has ready-to-use support for the main document frameworks DITA, DocBook, TEI and XHTML, and also includes support for all XML Schema languages, XSLT/XQuery debuggers, a WSDL analyzer, XML databases, XML Diff and Merge, a Subversion client and more.

New features in version 11.2: Version 11.2 of Oxygen XML Editor improves XML authoring, the XML development tools, the support for large documents and the SVN Client. The visual XML editing (Author mode) is available now as a separate component that can be integrated in Java applications or, as an Applet, in Web applications. A sample Web application showing the Author component in the browser, as an Applet, editing DITA documents is available...

Other XML Author improvements include support for preserving the formatting for unchanged elements and an updated Author API containing a number of new extensions that allow customizing the Outline, the Breadcrumb and the Status Bar. The XSLT Debugger provides more flexibility, and it is the first debugger that can step inside XPath 2.0 expressions. The Saxon 9 EE bundled with Oxygen can be used to run XQuery 1.1 transformations. The XProc support was aligned with the recent update as W3C Proposed Recommendation and includes the latest Calabash XProc processor.

In 'Author for DITA' there is support for Reusable Components: a fragment of a topic can be extracted into a separate file for reuse in different topics. The component can be reused by inserting an element with a conref attribute where the content of the component is needed.
This works without any additional configuration and supports any DITA specialization. Similarly, there's support for Content References Management: the DITA framework includes actions for adding, editing and removing a content reference (conref, conkeyref, conrefend attributes) to/from an existing element... A new schema caching mechanism allows large DITA Maps and their referred topics to be opened quickly..."
http://www.oxygenxml.com/index.html#new-version
See also XML Author Component for the DITA Documentation Framework: http://www.oxygenxml.com/demo/AuthorDemoApplet/author-component-dita.html

HTML5, Hardware Accelerated: First IE9 Platform Preview Available

Dean Hachamovitch, Windows Internet Explorer Weblog

At the Las Vegas MIX10 Conference, Microsoft Internet Explorer developers demonstrated "how the standard web patterns that developers already know and use broadly run better by taking advantage of PC hardware through IE9 on Windows." A blog article by Dean Hachamovitch provides an overview of what we showed, "across performance, standards, hardware-accelerated HTML5 graphics, and the availability of the IE9 Platform Preview for developers...

First, we showed IE9's new script engine, internally known as 'Chakra,' and the progress we've made on an industry benchmark for JavaScript performance... We showed our progress in making the same standards-based HTML, script, and formatting markup work across different browsers. We shared the data and framework that informed our approach, and demonstrated better support for several standards: HTML5, DOM, and CSS3. We showed IE9's latest Acid3 score (55); as we make progress on the industry goal of having the same markup that developers actually use working across browsers, our Acid3 score will continue to go up...

In several demonstrations, we showed the significant performance gains that graphically rich, interactive web pages enjoy when a browser takes full advantage of the PC's hardware capabilities through the operating system. The same HTML, script, and CSS markup work across several different browsers; the pages just run significantly faster in IE9 because of hardware-accelerated graphics. IE9 is also the first browser to provide hardware-accelerated SVG support...

The goal of standards and interoperability is that the same HTML, script, and formatting markup work the same across different browsers. Eliminating the need for different code paths for different browsers benefits everyone, and creates more opportunity for developers to innovate. The main technologies to call out here broadly are HTML5, CSS3, DOM, and SVG. The IE9 test drive site has more specifics and samples.
At this time, we're looking for developer feedback on our implementation of HTML5's parsing rules, Selection APIs, XHTML support, and inline SVG. Within CSS3, we're looking for developer feedback on IE9's support for Selectors, Namespaces, Colors, Values, Backgrounds and Borders, and Fonts. Within DOM, we're looking for developer feedback on IE9's support for Core, Events, Style, and Range... As IE makes more progress on the industry goal of 'same markup' for standards and parts of standards that developers actually use, the Acid3 score will continue to go up as a result. A key part of our approach to web standards is the development of an industry standard test suite. Today, Microsoft has submitted over 100 additional tests of HTML5, CSS3, DOM, and SVG to the W3C..."
http://preview.tinyurl.com/ykceeex
See also Paul Krill's InfoWorld article: http://www.infoworld.com/d/applications/microsoft-embraces-html5-specification-in-ie9-861

Open Source of ebMS V3 Message Handler and AS4 Profile on Sourceforge

Holodeck, an open source implementation of ebXML Messaging Version 3 and its AS4 profile, is now available on SourceForge with online documentation. The ebXML Messaging V3 specification defines a communications-protocol-neutral method for exchanging electronic business messages. It defines specific Web Services-based enveloping constructs supporting reliable, secure delivery of business information. Furthermore, the specification defines a flexible enveloping technique, permitting messages to contain payloads of any format type...

From the abstract of the OASIS specification "AS4 Profile of ebMS V3": "While ebMS 3.0 represents a leap forward in reducing the complexity of Web Services B2B messaging, the specification still contains numerous options and comprehensive alternatives for addressing a variety of scenarios for exchanging data over a Web Services platform. The AS4 profile of the ebMS 3.0 specification has been developed in order to bring continuity to the principles and simplicity that made AS2 successful, while adding better compliance to Web services standards, and features such as message pulling capability and a built-in Receipt mechanism. Using ebMS 3.0 as a base, a subset of functionality is defined along with implementation guidelines adopted based on the 'just-enough' design principles and AS2 functional requirements to trim down ebMS 3.0 into a more simplified and AS2-like specification for Web Services B2B messaging. This document defines the AS4 profile as a combination of a conformance profile that concerns an implementation capability, and of a usage profile that concerns how to use this implementation. A couple of variants are defined for the AS4 conformance profile -- the AS4 ebHandler profile and the AS4 Light Client profile -- that reflect different endpoint capabilities."

Holodeck's primary goal is to provide an open-source product for B2B messaging based on ebXML Messaging version 3 that can be used by ebXML communities as well as Web Services communities.
Because ebXML Messaging version 3 is compatible with web services, Holodeck provides an integration of ebXML, web services and AS4 in one package. Holodeck can be used in the following scenarios: (1) Pure ebXML messaging in B2B or within different departments of the same company. (2) Messaging gateway to an ESB, the ESB providing integration within a company, while Holodeck plays the gateway to communicate with the external world via messaging. (3) An environment where there is a need for both Web service consumption and heavy B2B messaging where web services fail...

Holodeck comes with a scalable architecture: a datastore for messages (JDO by default, a MySQL pre-configured option, and interfaces to other databases), and streaming for large messages (based on Axis2 streaming). The project is funded and maintained by Fujitsu America, Inc. This package comes with a "no coding necessary" out-of-the-box experience and tutorials, allowing you to deploy and test without having to write code up-front, using a directory system as an application-layer substitute to store elements of messages to be sent as files, and to receive them. Developers can download binaries and source code, and get a fresh copy directly from the "Subversion" versioning system...
http://ebxml.xml.org/news/open-source-of-ebms-v3-message-handler-and-its-as4-profile-on-sourceforge
See also the Holodeck resources from SourceForge: http://holodeck-b2b.sourceforge.net/

IESG Issues Last Call Review for MODS/MADS/METS/MARCXML/SRU Media Types

The Internet Engineering Steering Group (IESG) has received a request from an individual submitter to consider the following Standards Track I-D as an IETF Proposed Standard: "The Media Types application/mods+xml, application/mads+xml, application/mets+xml, application/marcxml+xml, application/sru+xml." The IESG plans to make a decision in the next few weeks, and solicits final comments on this action; please send substantive comments to the IETF lists by 2010-04-12.

This document "specifies Media Types for the following formats: MODS (Metadata Object Description Schema), MADS (Metadata Authority Description Schema), METS (Metadata Encoding and Transmission Standard), MARCXML (MARC21 XML Schema), and the SRU (Search/Retrieve via URL Response Format) Protocol response XML schema. These are all XML schemas providing representations of various forms of information including metadata and search results.

The U.S. Library of Congress, on behalf of and in collaboration with various components of the metadata and information retrieval community, has issued specifications which define formats for representation of various forms of information including metadata and search results. This memo provides information about the Media Types associated with several of these formats, all of which are XML schemas. (1) 'MODS: Metadata Object Description Schema' is an XML schema for a bibliographic element set that may be used for a variety of purposes, and particularly for library applications. (2) 'MADS: Metadata Authority Description Schema' is an XML schema for an authority element set used to provide metadata about agents (people, organizations), events, and terms (topics, geographics, genres, etc.). It is a companion to the MODS Schema.
(3) 'METS: Metadata Encoding and Transmission Standard' defines an XML schema for encoding descriptive, administrative, and structural metadata regarding objects within a digital library. (4) 'MARCXML MARC21 XML Schema' is an XML schema for the direct XML representation of the MARC format (for which there already exists a media type, application/marc); by 'direct XML representation' is meant that it encodes the actual MARC data within XML... (5) 'SRU: Search/Retrieve via URL Response Format' provides an XML schema for the SRU response. SRU is a protocol, and the media type 'sru+xml' pertains specifically to the default SRU response. The SRU response may be supplied in any of a number of suitable schemas (RSS and Atom, for example), and the client identifies the desired format in the request, hence the need for a media type. This mechanism will be introduced in SRU 2.0; in previous versions (that is, all versions to date; 2.0 is in development) all responses are supplied in the existing default format, so no media type was necessary. SRU 2.0 is being developed within OASIS.
http://xml.coverpages.org/draft-denenberg-mods-etc-media-types-01.txt
See also IANA registration for MIME Media Types: http://www.iana.org/assignments/media-types/

OASIS SCA-C-C++ Technical Committee Publishes Two Public Review Drafts

Bryan Aupperle, David Haney, Pete Robbins (eds), OASIS Review Drafts

Members of the OASIS Service Component Architecture / C and C++ (SCA-C-C++) Technical Committee have released two Committee Drafts for public review through March 25, 2010. This TC is part of the OASIS Open Composite Services Architecture (Open CSA) Member Section, which advances open standards that simplify SOA application development. Open CSA brings together vendors and users from around the world to collaborate on standard ways to unify services regardless of programming language or deployment platform. Open CSA promotes the further development and adoption of the Service Component Architecture (SCA) and Service Data Objects (SDO) families of specifications. SCA helps organizations more easily design and transform IT assets into reusable services that can be rapidly assembled to meet changing business requirements. SDO lets application programmers uniformly access and manipulate data from heterogeneous sources, including relational databases, XML data sources, Web services, and enterprise information systems.

"Service Component Architecture Client and Implementation Model for C++ Specification Version 1.1" describes "the SCA Client and Implementation Model for the C++ programming language. The SCA C++ implementation model describes how to implement SCA components in C++. A component implementation itself can also be a client to other services provided by other components or external services. The document describes how a C++ implemented component gets access to services and calls their operations. This document also explains how non-SCA C++ components can be clients to services provided by other components or external services. The document shows how those non-SCA C++ component implementations access services and call their operations."

"Service Component Architecture Client and Implementation Model for C Specification Version 1.1" describes "the SCA Client and Implementation Model for the C programming language.
The SCA C implementation model describes how to implement SCA components in C. A component implementation itself can also be a client to other services provided by other components or external services. The document describes how a component implemented in C gets access to services and calls their operations. The document also explains how non-SCA C components can be clients to services provided by other components or external services. The document shows how those non-SCA C component implementations access services and call their operations."

The OASIS SCA-C-C++ TC is developing "the C and C++ programming model for clients and component implementations using the Service Component Architecture (SCA). SCA defines a model for the creation of business solutions using a Service-Oriented Architecture, based on the concept of Service Components which offer services and which make references to other services. SCA models business solutions as compositions of groups of service components, wired together in a configuration that satisfies the business goals. SCA applies aspects such as communication methods and policies for infrastructure capabilities such as security and transactions through metadata attached to the compositions."
http://docs.oasis-open.org/opencsa/sca-c-cpp/sca-cppcni-1.1-spec-cd05.html
See also the Model for C specification: http://docs.oasis-open.org/opencsa/sca-c-cpp/sca-ccni-1.1-spec-cd05.html

Early Draft Review for JSR-310 Specification: Date and Time API

Stephen Colebourne, Michael Nascimento Santos (et al., eds), JSR Draft

Project editors for Java Specification Request 310: Date and Time API have published an Early Draft Review (EDR) to gain feedback on an early version of the JSR. The contents of the EDR are the prose specification and the javadoc. According to the original published Request, JSR 310 "will provide a new and improved date and time API for Java. The main goal is to build upon the lessons learned from the first two APIs (Date and Calendar) in Java SE, providing a more advanced and comprehensive model for date and time manipulation.

The new API will be targeted at all applications needing a data model for dates and times. This model will go beyond classes to replace Date and Calendar, to include representations of date without time, time without date, durations and intervals. This will raise the quality of application code. For example, instead of using an int to store a duration, and javadoc to describe it as being a number of days, the date and time model will provide a class defining it unambiguously.

The new API will also tackle related date and time issues. These include formatting and parsing, taking into account the ISO 8601 standard and its implementations, such as XML. In addition, the areas of serialization and persistence will be considered... In this specification model, dates and times are separated into two basic use cases: machine-scale and human-scale. Machine-scale time represents the passage of time using a single, continually incrementing number. The rules that determine how the scale is measured and communicated are typically defined by international scientific standards organisations. Human-scale time represents the passage of time using a number of named fields, such as year, month, day, hour, minute and second.
The rules that determine how the fields work together are defined in a calendar system... From the specification introduction: "Many Java applications require logic to store and manipulate dates and times. At present, Java SE provides a number of disparate APIs for this purpose, including Date, Calendar, SQL Date/Time/Timestamp and XML Duration/XMLGregorianCalendar. Unfortunately, these APIs are not all particularly well-designed and they do not cover many use cases needed by developers. As an example, Java developers currently have no standard Java SE class to represent the concept of a date without a time, a time without a date or a duration. The result of these missing features has been widespread abuse of the facilities which are provided, such as using the Date or Calendar class with the time set to midnight to represent a date without a time. Such an approach is very error-prone: there are certain time zones where midnight doesn't exist once a year due to the daylight saving time cutover. JSR-310 tackles this by providing a comprehensive set of date and time classes suitable for Java SE today. The specification includes: Date and Time; Date without Time; Time without Date; Offset from UTC; Time Zone; Durations; Periods; Formatting and Parsing; a selection of calendar systems...

Design Goals for JSR-310: (1) Immutable: The JSR-310 classes should be immutable wherever possible. Experience over time has shown that APIs at this level should consist of simple immutable objects. These are simple to use, can be easily shared, are inherently thread-safe, friendly to the garbage collector and tend to have fewer bugs due to the limited state-space. (2) Fluent API: The API strives to be fluent within the standard patterns of Java SE. A fluent API has methods that are easy to read and understand, specifically when chained together. The key goal here is to simplify the use and enhance the readability of the API.
(3) Clear, explicit and expected: Each method in the API should be well-defined and clear in what it does. This isn't just a question of good javadoc, but also of ensuring that the method can be called in isolation successfully and meaningfully. (4) Extensible: The API should be extensible in well-defined ways by application developers, not just JSR authors. The reasoning is simple: there are just far too many weird and wonderful ways to manipulate time. A JSR cannot capture all of them, but an extensible JSR design can allow for them to be added as required by application developers or open source projects..."
http://wiki.java.net/bin/view/Projects/DateTimeEDR1
See also the InfoQ article by Alex Blewitt and Charles Humble: http://www.infoq.com/news/2010/03/jsr-310
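The "immutable value object" and "date without a time" goals above are language-independent. A minimal sketch of the idea, written in Python rather than Java and deliberately not the JSR-310 API itself (the class and method names here are this sketch's own), might look like:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # frozen: any attempt to mutate raises an error
class LocalDate:
    """A date without a time or time zone, as an immutable value object."""
    year: int
    month: int
    day: int

    def with_year(self, year):
        # "Mutators" return a new instance instead of changing this one,
        # mirroring the fluent, immutable style JSR-310 aims for.
        return replace(self, year=year)

release = LocalDate(2010, 3, 17)
moved = release.with_year(2011)

print(release)  # LocalDate(year=2010, month=3, day=17) -- unchanged
print(moved)    # LocalDate(year=2011, month=3, day=17)
```

Because instances never change, they can be freely shared between threads and cached, which is exactly the rationale the design goals give for immutability.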

W3C XML Security Working Group Releases Four Working Drafts for Review

Members of the W3C XML Security Working Group have published four Working Draft specifications for public review. This WG, along with the W3C Web Security Context Working Group, is part of the W3C XML Security Activity, and is chartered to take the next step in developing the XML security specifications.

"XML Encryption Syntax and Processing Version 1.1" specifies "a process for encrypting data and representing the result in XML. The data may be in a variety of formats, including octet streams and other unstructured data, or structured data formats such as XML documents, an XML element, or XML element content. The result of encrypting data is an XML Encryption element which contains or references the cipher data."

"XML Security Algorithm Cross-Reference" is a W3C Note which "summarizes XML Security algorithm URI identifiers and the specifications associated with them. The various XML Security specifications have defined a number of algorithms of various types, while allowing and expecting additional algorithms to be defined later. Over time, these identifiers have been defined in a number of different specifications, including XML Signature, XML Encryption, RFCs and elsewhere. This makes it difficult for users of the XML Security specifications to know whether and where a URI for an algorithm of interest has been defined, and can lead to the use of incorrect URIs. The purpose of this Note is to collect the various known URIs at the time of its publication and indicate the specifications in which they are defined in order to avoid confusion and errors... The note indicates explicitly whether an algorithm is mandatory or recommended in other specifications. If nothing is said, then readers should assume that support for the algorithms given is optional."

The "XML Security Generic Hybrid Ciphers" Working Draft "augments XML Encryption Version 1.1 by defining algorithms, XML types and elements necessary to enable use of generic hybrid ciphers in XML Security applications.
Generic hybrid ciphers allow for a consistent treatment of asymmetric ciphers when encrypting data and consist of a key encapsulation algorithm with associated parameters and a data encapsulation algorithm with associated parameters." Fourth, "XML Security RELAX NG Schemas" serves to publish RELAX NG schemas for XML Security specifications, including XML Signature 1.1 and XML Signature Properties.
http://www.w3.org/News/2010#entry-8749
See also the W3C Web Security Context WG and XML Security WG: http://www.w3.org/Security/Activity

Wednesday, March 17, 2010

Document Format Standards and Patents

Alex Brown, Blog
This post is part of an ongoing series. It expands on Item 9 of 'Reforming Standardisation in JTC 1', which proposed Ten Recommendations for Reform; Item 9 was "Clarify intellectual property policies: International Standards must have clearly stated IP policies, and avoid unacceptable patent encumbrances."
Historically, patents have been a fraught topic, with an uneasy co-existence with standards. Perhaps (within JTC 1) one of the most notorious recent examples surrounded the JPEG Standard and, in part prompted by such problems, there are certainly many people of good will wanting better management of IP in standards. Judging by some recent developments in document format standardisation, it seems probable that this will be the area where progress can next be made...
The Myth of Unencumbered Technology: Given the situation we are evidently in, it is clear that no technology is safe. The brazen claims of corporations, the lack of diligence by the US Patent Office, and the capriciousness of courts mean that any technology, at any time, may suddenly become patent encumbered. Technical people, being logical and reasonable, often make the mistake of thinking the system is bound by logic and reason; they assume that because they can see 'obvious' prior art, then it will apply; however, as the case of the i4i patent vividly illustrates, this is simply not so.
While the "broken stack" of patents is beyond repair by any single standards body, at the very least the correct application of the rules can make the situation for users of document format standards more transparent and certain. In the interests of making progress in this direction, it seems a number of points need addressing now. (1) Users should be aware that the various covenants and promises being pointed to by the US vendors need not be relevant to them as regards standards use. Done properly, International Standardization can give a clearer and stronger guarantee of license availability -- without the caveats, interpretable points and exit strategies these vendors' documents invariably have. (2) In particular, it should be of concern to NBs that there is no entry in JTC 1's patent database for OOXML (there is for DIS 29500, its precursor text, a ZRAND promise from Microsoft); there is no entry whatsoever for ODF... (3) In the case of the i4i patent, one implementer has already commented that implementing CustomXML in its entirety may run the risk of infringement, and this is probably, after all, why Microsoft patched Word in the field to remove some aspects of its CustomXML support... (4) When declaring their patents to JTC 1, patent holders are given an option whether to make a general declaration about the patents that apply to a standard, or to make a particular declaration about each and every itemized patent which applies. I believe NBs should be insisting that patent holders enumerate precisely the patents they hold which they claim apply... There is obviously much to do, and I am hoping that at the forthcoming SC 34 meetings in Stockholm this work can begin...
http://www.adjb.net/post/Document-Format-Standards-and-Patents.aspx
See also article Part 1: http://www.adjb.net/post/Reforming-Standardisation-in-JTC-1-e28093-Part-1.aspx

Consensus Emerges for Key Web Application Standard

"Browser makers, grappling with outmoded technology and a vision to rebuild the Web as a foundation for applications, have begun converging on a seemingly basic but very important element of cloud computing. That ability is called local storage, and the new mechanism is called Indexed DB. Indexed DB, proposed by Oracle and initially called WebSimpleDB, is largely just a prototype at this stage, not something Web programmers can use yet. But already it's won endorsements from Microsoft, Mozilla, and Google, and together, Internet Explorer, Firefox, and Chrome account for more than 90 percent of the usage on the Net today.
Standardization could come: advocates have worked Indexed DB into the considerations of the W3C, the World Wide Web Consortium that standardizes HTML and other Web technologies. In the W3C discussions, Indexed DB got a warm reception from Opera, the fifth-ranked browser.
It may sound perverse, but the ability to store data locally on a computer turns out to be a very important part of the Web application era that's really just getting under way. The whole idea behind cloud computing is to put applications on the network, liberating them from being tied to a particular computer, but it turns out that the computer still matters, because the network is neither fast nor ubiquitous. Local storage lets Web programmers save data onto computers where it's convenient for processors to access. That can mean, for example, that some aspects of Gmail and Google Docs can work while you're disconnected from the network. It also lets data be cached on the computer for quick access later. The overall state of the Web application is maintained on the server, but stashing data locally can make cloud computing faster and more reliable..."
An editor's draft of the W3C specification "Indexed Database API" is available online: "User agents need to store large numbers of objects locally in order to satisfy off-line data requirements of Web applications. 'Web Storage' [10-September-2009 WD] is useful for storing pairs of keys and their corresponding values. However, it does not provide in-order retrieval of keys, efficient searching over values, or storage of duplicate values for a key. This specification provides a concrete API to perform advanced key-value data management that is at the heart of most sophisticated query processors. It does so by using transactional databases to store keys and their corresponding values (one or more per key), and providing a means of traversing keys in a deterministic order. This is often implemented through the use of persistent B-tree data structures that are considered efficient for insertion and deletion as well as in-order traversal of very large numbers of data records."
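The storage model the draft describes -- sorted keys, one or more values per key, deterministic in-order traversal -- can be sketched in a few lines of Python. This is an illustrative toy only, not the W3C API; the class and method names are invented for the sketch:

```python
import bisect

class TinyIndexedStore:
    """Toy in-memory sketch of the Indexed DB storage model: keys kept in
    sorted order, duplicate values allowed per key, in-order cursors.
    (Illustration only -- not the W3C Indexed Database API.)"""

    def __init__(self):
        self._keys = []    # sorted list of keys (stand-in for a B-tree)
        self._values = {}  # key -> list of values (one or more per key)

    def put(self, key, value):
        if key not in self._values:
            bisect.insort(self._keys, key)
            self._values[key] = []
        self._values[key].append(value)

    def get(self, key):
        return list(self._values.get(key, []))

    def cursor(self, lower=None):
        """Yield (key, value) pairs in deterministic key order,
        optionally starting at the first key >= lower."""
        start = 0 if lower is None else bisect.bisect_left(self._keys, lower)
        for key in self._keys[start:]:
            for value in self._values[key]:
                yield key, value

store = TinyIndexedStore()
store.put("b", 2)
store.put("a", 1)
store.put("b", 3)   # duplicate value for an existing key
print(list(store.cursor()))  # [('a', 1), ('b', 2), ('b', 3)]
```

A real engine would back this with a persistent B-tree and wrap operations in transactions, but the ordered-traversal contract is the part Web Storage lacks and Indexed DB adds.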
http://news.cnet.com/8301-30685_3-20000376-264.html
See also the latest editor's version for Indexed Database API: http://dev.w3.org/2006/webapi/WebSimpleDB/

IETF First Draft for Codec Requirements

Members of the IETF Internet Wideband Audio Codec (CODEC) Working Group have released an initial level -00 Internet Draft specification for "Codec Requirements." Additional discussion (development process, evaluation, requirements conformance, intellectual property issues) is provided in the draft for "Guidelines for the Codec Development Within the IETF." The IETF CODEC Working Group was formed recently "to ensure the existence of a single high-quality audio codec that is optimized for use over the Internet and that can be widely implemented and easily distributed among application developers, service operators, and end users."
"According to reports from developers of Internet audio applications and operators of Internet audio services, there are no standardized, high-quality audio codecs that meet all of the following three conditions: (1) Are optimized for use in interactive Internet applications. (2) Are published by a recognized standards development organization (SDO) and therefore subject to clear change control. (3) Can be widely implemented and easily distributed among application developers, service operators, and end users. According to application developers and service operators, an audio codec that meets all three of these would: enable protocol designers to more easily specify a mandatory-to-implement codec in their protocols and thus improve interoperability; enable developers to more easily build innovative, interactive applications for the Internet; enable service operators to more easily deploy affordable, high-quality audio services on the Internet; and enable end users of Internet applications and services to enjoy an improved user experience.
The "Codec Requirements" specification provides requirements for an audio codec designed specifically for use over the Internet. The requirements attempt to address the needs of the most common Internet interactive audio transmission applications and to ensure good quality when operating in conditions that are typical for the Internet. These requirements address the quality, sampling rate, delay, bit-rate, and packet loss robustness. Other desirable codec properties are considered as well...
In-scope applications include: (1) Point to point calls -- where point to point calls are voice over IP (VoIP) calls from two "standard" (fixed or mobile) phones, and implemented in hardware or software. (2) Conferencing, where conferencing applications that support multi-party calls have additional requirements on top of the requirements for point-to-point calls; conferencing systems often have higher-fidelity audio equipment and have greater network bandwidth available -- especially when video transmission is involved. (3) Telepresence, where most telepresence applications can be considered to be essentially very high-quality video-conferencing environments, so all of the conferencing requirements also apply to telepresence. (4) Teleoperation, where teleoperation applications are similar to telepresence, with the exception that they involve remote physical interactions. (5) In-game voice chat, where the requirements are similar to those of conferencing, with the main difference being that narrowband compatibility is not necessary. (6) Live distributed music performances / Internet music lessons, and other applications, where live music requires extremely low end-to-end delay and is one of the most demanding applications for interactive audio transmission.
http://xml.coverpages.org/draft-ietf-codec-requirements-00.txt
See also the IETF Internet Wideband Audio Codec (CODEC) Working Group Charter: http://www.ietf.org/dyn/wg/charter/codec-charter.html

Don't Look Down: The Path to Cloud Computing is Still Missing a Few Steps

This article narrates how government agencies are seeking to navigate issues of interoperability, data migrations, security, and standards in the context of Cloud Computing. The government defines cloud computing as an on-demand model for network access, allowing users to tap into a shared pool of configurable computing resources, such as applications, networks, servers, storage and services, that can be rapidly provisioned and released with minimal management effort or service-provider interaction.
Momentum for cloud computing has been building during the past year, after the new [U.S.] administration trumpeted the approach as a way to derive greater efficiency and cost savings from information technology investments. But the journey to cloud computing infrastructures will take a few more years to unfold, federal CIOs and industry experts say. Issues of data portability among different cloud services, migration of existing data, security and the definition of standards for all of those areas are the missing rungs on the ladder to the clouds.
The Federal Cloud Computing Security Working Group, an interagency initiative, is working to develop the Government-Wide Authorization Program (GAP), which will establish a standard set of security controls and a common certification and accreditation program that will validate cloud computing providers... Cloud vendors need to implement multiple agency policies, which can translate into duplicative risk management processes and lead to inconsistent application of federal security requirements.
At the user level, there are challenges associated with access control and identity management, according to Doug Bourgeois, director of the Interior Department's National Business Center. Organizations must extend their existing identity, access management, audit and monitoring strategies into the cloud. However, the problem is that existing enterprise systems might not easily integrate with the cloud... An agency cannot transfer data from a public cloud provider, such as Amazon or Google, and put it in an infrastructure-as-a-service platform that a private cloud provider develops for the agency and then exchange that data with another type of cloud provider; that type of data transfer is difficult because there are no overarching standards for operating in a hybrid environment...
http://gcn.com/articles/2010/03/15/cloud-computing-missing-steps.aspx

Implementing User Agent Accessibility Guidelines (UAAG) 2.0

James Allan, Kelly Ford, Jeanne Spellman (eds), W3C Technical Report
Members of the W3C User Agent Accessibility Guidelines Working Group have published a First Public Working Draft for "Implementing UAAG 2.0: A Guide to Understanding and Implementing User Agent Accessibility Guidelines 2.0" and an updated version of the "User Agent Accessibility Guidelines (UAAG) 2.0" specification. Comments on the two documents should be sent to the W3C public list by 16-April-2010.
The "User Agent Accessibility Guidelines (UAAG) 2.0" specification is part of a series of accessibility guidelines published by the W3C Web Accessibility Initiative (WAI). It provides guidelines for designing user agents that lower barriers to Web accessibility for people with disabilities. User agents include browsers and other types of software that retrieve and render Web content. A user agent that conforms to these guidelines will promote accessibility through its own user interface and through other internal facilities, including its ability to communicate with other technologies (especially assistive technologies). Furthermore, all users, not just users with disabilities, should find conforming user agents to be more usable.
In addition to helping developers of browsers and media players, the document will also benefit developers of assistive technologies because it explains what types of information and control an assistive technology may expect from a conforming user agent. Technologies not addressed directly by this document (e.g., technologies for braille rendering) will be essential to ensuring Web access for some users with disabilities.
The Working Draft for "Implementing UAAG 2.0" provides supporting information for the User Agent Accessibility Guidelines (UAAG) 2.0. The document provides explanation of the intent of UAAG 2.0 success criteria, examples of implementation of the guidelines, best practice recommendations and additional resources for the guidelines. It includes a new section supporting the definition of a user agent.
http://www.w3.org/TR/2010/WD-IMPLEMENTING-UAAG20-20100311/
See also the updated UAAG 2.0 specification: http://www.w3.org/TR/2010/WD-UAAG20-20100311/

IETF Internet Draft: Requirements for End-to-End Encryption in XMPP

Members of the IETF Extensible Messaging and Presence Protocol (XMPP) Working Group have published an Internet Draft specifying "Requirements for End-to-End Encryption in the Extensible Messaging and Presence Protocol (XMPP)." The Extensible Messaging and Presence Protocol is an open technology for real-time communication, which powers a wide range of applications including instant messaging, presence, multi-party chat, voice and video calls, collaboration, lightweight middleware, content syndication, and generalized routing of XML data.
XMPP technologies are typically deployed using a client-server architecture. As a result, XMPP endpoints (often but not always controlled by human users) need to communicate through one or more servers. For example, the user 'juliet@capulet.lit' connects to the 'capulet.lit' server and the user 'romeo@montague.lit' connects to the 'montague.lit' server, but in order for Juliet to send a message to Romeo the message will be routed over her client-to-server connection with capulet.lit, over a server-to-server connection between 'capulet.lit' and 'montague.lit', and over Romeo's client-to-server connection with montague.lit. Although the XMPP-CORE specification requires support for Transport Layer Security to make it possible to encrypt all of these connections, when XMPP is deployed any of these connections might be unencrypted. Furthermore, even if the server-to-server connection is encrypted and both of the client-to-server connections are encrypted, the message would still be in the clear while processed by both the 'capulet.lit' and 'montague.lit' servers.
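The distinction between hop-by-hop and end-to-end protection can be made concrete with a toy simulation. Here XOR with a shared key stands in for real cryptography (it is emphatically not secure), and the three link keys model Juliet's client-to-server, the server-to-server, and Romeo's client-to-server connections:

```python
# Toy model: hop-by-hop link encryption (TLS-like) vs. end-to-end encryption.
# XOR "encryption" is a placeholder for illustration only -- never use it.

def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def hop_by_hop(message: bytes, link_keys):
    """Each hop decrypts the incoming link and re-encrypts for the next,
    so every intermediate server briefly holds the plaintext."""
    seen_by_servers = []
    current = message
    for key in link_keys:
        wire = xor_crypt(current, key)      # protected on the wire...
        current = xor_crypt(wire, key)      # ...but decrypted at the hop,
        seen_by_servers.append(current)     # which therefore sees plaintext
    return current, seen_by_servers

def end_to_end(message: bytes, e2e_key: bytes, link_keys):
    """Juliet encrypts once for Romeo; servers route opaque ciphertext."""
    ciphertext = xor_crypt(message, e2e_key)
    current = ciphertext
    for key in link_keys:
        current = xor_crypt(xor_crypt(current, key), key)  # link crypto in/out
    return xor_crypt(current, e2e_key), ciphertext

msg = b"Romeo, Romeo!"
links = [b"c2s-capulet", b"s2s-link", b"c2s-montague"]

delivered, seen = hop_by_hop(msg, links)
print(delivered == msg, msg in seen)        # delivered, but servers saw it

delivered, on_server = end_to_end(msg, b"juliet+romeo", links)
print(delivered == msg, on_server != msg)   # delivered, servers saw ciphertext
```

This is exactly the gap the draft's requirements target: even with every link under TLS, the hop-by-hop model leaves the message readable at 'capulet.lit' and 'montague.lit'.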
Thus, end-to-end ('e2e') encryption of traffic sent over XMPP is a desirable goal. Since 1999, the Jabber/XMPP developer community has experimented with several such technologies, including OpenPGP, S/MIME, and encrypted sessions. More recently, the community has explored the possibility of using Transport Layer Security (TLS) as the base technology for e2e encryption. In order to provide a foundation for deciding on a sustainable approach to e2e encryption, this document specifies a set of requirements that the ideal technology would meet.
This specification primarily addresses communications security ('commsec') between two parties, especially confidentiality, data integrity, and peer entity authentication. Communications security can be subject to a variety of attacks, which RFC 3552 divides into passive and active categories. In a passive attack, information is leaked (e.g., a passive attacker could read all of the messages that Juliet sends to Romeo). In an active attack, the attacker can add, modify, or delete messages between the parties, thus disrupting communications... Ideally, any technology for end-to-end encryption in XMPP could be extended to cover any of: One-to-one communication sessions between two 'online' entities, One-to-one messages that are not transferred in real time, One-to-many information broadcast, Many-to-many communication sessions among more than two entities. However, both one-to-many broadcast and many-to-many sessions are deemed out-of-scope for this document, and this document puts more weight on one-to-one communication sessions..."
http://xml.coverpages.org/draft-ietf-xmpp-e2e-requirements-01.txt
See also Cryptographic Key Management: http://xml.coverpages.org/keyManagement.html

Sunday, March 14, 2010

StoneGate SSL VPN Virtual Solution Supports OVF, SAML 2.0, and ADFS

"Stonesoft has introduced three new products designed to provide secure mobile and remote access. This includes the new StoneGate SSL VPN Virtual solution, StoneGate SSL VPN 1.4 and StoneGate SSL-1060. The StoneGate SSL VPN Virtual solution is based on the Open Virtualization Format (OVF) standard and provides multiple features that meet the needs of these environments, such as strong authentication, a flexible application portal and support for Federation ID standards such as SAML 2.0 and ADFS. The StoneGate SSL VPN Virtual Appliance is compatible with both VMware's ESX/ESXi 3.5 and 4.0 (vSphere) versions.

The StoneGate SSL VPN Virtual solution complements the company's StoneGate Virtual Firewall and Virtual IPS solutions for virtual and cloud computing environments. The new solution allows rapid deployment and implementation of secure mobile access to cloud computing.

As cloud computing becomes more prevalent in corporate business and virtualized data centers, there is a stronger need for secure access to corporate applications in the cloud. The StoneGate SSL VPN Virtual solution combines the need for granular access to the corporate web and legacy applications with the secure and authenticated profiling of users.

The StoneGate SSL VPN 1.4 offers organizations enhanced security provided with integrated mobile authentication methods, granular access control and a holistic view of access rights within a single integrated access policy. Additionally, the appliance provides easy management and administration of access control for all network users. Administrators can easily select the parameters, or a combination of parameters, that will grant or deny the access to applications. This includes sophisticated assessment and trace removal techniques to ensure that corporate security standards are enforced at all times for mobile and roaming users..."

http://www.stonesoft.com/us/news_and_events/releases/2010/11032010.html
See also the OASIS SAML TC: http://www.oasis-open.org/committees/security/

Introduction to Pyjamas: Exploit the Synergy of GWT and Python

Pyjamas is a cool tool, or framework, for developing Asynchronous JavaScript and XML (Ajax) applications in Python. It's a versatile tool that you can use to write comprehensive applications without writing any JavaScript code. This series examines the myriad aspects of Pyjamas, and this first article explores Pyjamas's background and basic elements.

Google's Web Toolkit (GWT) lets you develop a Rich Internet Application (RIA) with Ajax, entirely in Java code. You can use the rich Java toolset (IDEs, refactoring, code completion, debuggers, and so on) to develop applications that can be deployed on all major Web browsers. With GWT you can write applications that behave like desktop applications but run in the browser. Pyjamas, a GWT port, is a tool and framework for developing Ajax applications in Python.

WebKit, XUL, and their ilk bring modern flair to desktop applications. Pyjamas brings WebKit to Python developers. With WebKit, Pyjamas becomes a cross-browser and cross-platform set of GUI widgets. You can develop widgets that will run anywhere WebKit and XUL run. The Pyjamas API-based application can live anywhere GWT applications would live. Plus, Pyjamas lets you write desktop applications built on top of WebKit and XUL. This is preferable to building applications on top of Qt or GTK because WebKit supports CSS, and it is used in many other places for reliable rendering (iPhone, Safari, Android, and so on).

With Pyjamas you create containers, then add widgets to the containers. The widgets can be labels, text fields, buttons, and so forth. Widgets, like buttons, have event handlers so you can listen for click events from the button..."
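The container/widget/event-handler model described above can be sketched in plain Python. This is a self-contained stand-in that mimics the shape of the Pyjamas/GWT model, not the actual Pyjamas API (whose real classes live in modules like pyjamas.ui and run in a browser):

```python
# Minimal stand-in for the Pyjamas/GWT model: containers hold widgets,
# and widgets such as buttons dispatch events to registered handlers.

class Widget:
    pass

class Label(Widget):
    def __init__(self, text):
        self.text = text

class Button(Widget):
    def __init__(self, text):
        self.text = text
        self._click_handlers = []

    def add_click_listener(self, handler):
        self._click_handlers.append(handler)

    def click(self):
        # Simulate the browser firing a click event.
        for handler in self._click_handlers:
            handler(self)

class Panel(Widget):
    """A container: widgets are added to it, then rendered together."""
    def __init__(self):
        self.children = []

    def add(self, widget):
        self.children.append(widget)

panel = Panel()
label = Label("waiting...")
button = Button("Greet")
button.add_click_listener(lambda sender: setattr(label, "text", "Hello!"))
panel.add(label)
panel.add(button)

button.click()
print(label.text)   # Hello!
```

In real Pyjamas code the structure is the same: build a panel, add widgets, attach listeners, and let the (compiled-to-JavaScript) event loop invoke them.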
http://www.ibm.com/developerworks/web/library/wa-aj-pyjamas/

Dissecting the Consortium: A Uniquely Flexible Platform for Collaboration

Andrew Updegrove, Standards Today Bulletin

"The opportunities and imperatives for collaborative action of all kinds among both for-profit and non-profit entities are growing as the world becomes more interconnected and problem solving becomes less susceptible to unilateral action. Those activities include research and development, information acquisition and sharing, group purchasing, open source software and content creation, applying for government grant funding, and much more.
At the same time, the rapid spread of Internet and Web accessibility allows collaborative activities to be undertaken more easily, and among more widely distributed participants, than has ever been possible before. But while the technology enabling collaboration has become ubiquitous, hard-won knowledge regarding best practices, successful governance structures, and appropriate legal frameworks for forming and managing successful collaborative activities has yet to be widely shared. As a result, those wishing to launch new collaborative projects may have difficulty finding reliable guidance in order to create structures appropriate to support their activities.
In this article, I provide a list of attributes that define and functions that are common to consortia, an overview of how their activities are typically staffed and supported, a comparative taxonomy of the existing legal/governance structures that have been created to address them, and an overview of the legal concerns which consortium founders need to address...
Multiple forces in the world today are converging to increase the ease and raise the value of collaboration in both the public and private sectors. Indeed, it is becoming increasingly common in business literature to find the opinion expressed that companies that fail to collaborate with their peers will be at a severe disadvantage to their more-willing competitors. In light of such opportunities, it is important for the founders of new collaborative projects, and their legal counsel, to be familiar with the types of frameworks available to serve as platforms for their endeavors, and to choose wisely before launching their initiatives. Happily, the consortium model, in all of its variations, provides a uniquely flexible and appropriate foundation upon which the collaborations of the future can be based..."
http://www.consortiuminfo.org/bulletins/jan10.php#feature

Expressing SNMP SMI Datatypes in XML Schema Definition Language

Mark Ellison and Bob Natale (eds), IETF Internet Draft

Members of the IETF Operations and Management Area Working Group have published a revised Internet Draft for "Expressing SNMP SMI Datatypes in XML Schema Definition Language." The memo defines the IETF standard expression of Structure of Management Information (SMI) base datatypes in Extensible Markup Language (XML) Schema Definition (XSD) language. The primary objective of this memo is to enable the production of XML documents that are as faithful to the SMI as possible, using XSD as the validation mechanism.

Background: "Numerous use cases exist for expressing the management information described by SMI Management Information Base (MIB) modules in XML. Potential use cases reside both outside and within the traditional IETF network management community. For example, developers of some XML-based management applications may want to incorporate the rich set of data models provided by MIB modules. Developers of other XML-based management applications may want to access MIB module instrumentation via gateways to SNMP agents. Such applications benefit from the IETF standard mapping of SMI datatypes to XML datatypes via XSD.

MIB modules use SMIv2 (RFC 2578) to describe data models. For legacy MIB modules, SMIv1 (RFC 1155) was used. MIB data conveyed in variable bindings ('varbinds') within protocol data units (PDUs) of SNMP messages use the primitive, base datatypes defined by the SMI. The SMI allows for the creation of derivative datatypes, 'textual conventions' ('TCs'). A TC has a unique name, has a syntax that either refines or is a base SMI datatype and has relatively precise application-level semantics. TCs facilitate correct application-level handling of MIB data, improve readability of MIB modules by humans and support appropriate renderings of MIB data.

Values in varbinds corresponding to MIB objects defined with TC syntax are always encoded as the base SMI datatype underlying the TC syntax. Thus, the XSD mappings defined in this memo provide support for values of MIB objects defined with TC syntax as well as for values of MIB objects defined with base SMI syntax. Various independent schemes have been devised for expressing SMI datatypes in XSD. These schemes exhibit a degree of commonality, especially concerning numeric SMI datatypes, but these schemes also exhibit sufficient differences, especially concerning the non-numeric SMI datatypes, precluding uniformity of expression and general interoperability..."

http://xml.coverpages.org/draft-ietf-opsawg-smi-datatypes-in-xsd-06.txt
See also the IETF Operations and Management Area Working Group WG Status Pages: http://tools.ietf.org/wg/opsawg/

Proposed Recommendation Call for Review: XProc - An XML Pipeline Language

Norman Walsh, Alex Milowski, Henry S. Thompson (eds), W3C PR

The W3C XML Processing Model Working Group has published a Proposed Recommendation for "XProc: An XML Pipeline Language", together with an "Implementation Report for XProc: An XML Pipeline Language." Given that the changes to this draft do not affect the validity of that earlier implementation feedback, except in specific areas also now covered by more recent implementation feedback, the Working Group is now publishing this version as a Proposed Recommendation. The review period ends on 15-April-2010; members of the public are invited to send comments on this Proposed Recommendation to the 'public-xml-processing-model-comments' mailing list.

An XML Pipeline specifies a sequence of operations to be performed on a collection of XML input documents. Pipelines take zero or more XML documents as their input and produce zero or more XML documents as their output.

A pipeline consists of steps. Like pipelines, steps take zero or more XML documents as their inputs and produce zero or more XML documents as their outputs. The inputs of a step come from the web, from the pipeline document, from the inputs to the pipeline itself, or from the outputs of other steps in the pipeline. The outputs from a step are consumed by other steps, are outputs of the pipeline as a whole, or are discarded. There are three kinds of steps: atomic steps, compound steps, and multi-container steps. Atomic steps carry out single operations and have no substructure as far as the pipeline is concerned. Compound steps and multi-container steps control the execution of other steps, which they include in the form of one or more subpipelines.

The result of evaluating a pipeline (or subpipeline) is the result of evaluating the steps that it contains, in an order consistent with the connections between them. A pipeline must behave as if it evaluated each step each time it is encountered. Unless otherwise indicated, implementations must not assume that steps are functional (that is, that their outputs depend only on their inputs, options, and parameters) or side-effect free. The pattern of connections between steps will not always completely determine their order of evaluation. The evaluation order of steps not connected to one another is implementation-dependent... A typical step has zero or more inputs, from which it receives XML documents to process, zero or more outputs, to which it sends XML document results, and can have options and/or parameters. An atomic step is a step that performs a unit of XML processing, such as XInclude or transformation, and has no internal subpipeline. Atomic steps carry out fundamental XML operations and can perform arbitrary amounts of computation, but they are indivisible. An XSLT step, for example, performs XSLT processing; a Validate with XML Schema step validates one input with respect to some set of XML Schemas, etc..."
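The steps-connected-by-document-flows model can be sketched in plain Python. Real XProc pipelines are XML documents with elements like p:xinclude and p:validate-with-xml-schema; the functions below are hypothetical stubs that only mimic the flow of documents from one atomic step's outputs to the next step's inputs:

```python
# Plain-Python stand-in for the XProc evaluation model: a pipeline is an
# ordered set of connected steps, each consuming zero or more documents
# and producing zero or more. (Sketch only -- not XProc syntax.)

def xinclude_stub(docs):
    # Atomic step stand-in: pretend to expand an inclusion marker.
    return [doc.replace("<xi/>", "<included/>") for doc in docs]

def validate_stub(docs):
    # Atomic step stand-in: pass through only well-formed-looking documents.
    return [doc for doc in docs if doc.startswith("<")]

def run_pipeline(steps, inputs):
    """Evaluate steps in connection order: each step's outputs
    become the next step's inputs."""
    docs = inputs
    for step in steps:
        docs = step(docs)
    return docs

result = run_pipeline([xinclude_stub, validate_stub],
                      ["<doc><xi/></doc>", "not-xml"])
print(result)   # ['<doc><included/></doc>']
```

Note how the second document is discarded mid-pipeline, matching the spec's point that step outputs may be consumed, exported, or dropped; what the sketch deliberately omits is XProc's compound steps, which wrap subpipelines and control their execution.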

http://www.w3.org/TR/2010/PR-xproc-20100309/
See also the XProc Implementation Report: http://www.w3.org/XML/XProc/2010/02/ir.html

OASIS Blue Member Section: Open Standards for Smart Energy Grids

OASIS has announced the formation of a new Member Section, OASIS Blue, which will bring together a variety of open standards projects related to energy, intelligent buildings, and natural resources. OASIS Blue will leverage the innovation of existing electronic commerce standards and the power of the Internet to achieve meaningful sustainability. An international effort, OASIS Blue incorporates work that has been identified as a central deliverable for the U.S. government's strategic Smart Grid initiative. OASIS Blue welcomes suggestions for forming new Committees related to its mission.
The collaboration includes IBM, Constellation NewEnergy, CPower, EnerNOC, Grid Net, HP, NeuStar, TIBCO, U.S. Department of Defense, U.S. National Institute of Standards and Technology (NIST), and others.
Several Technical Committees will coordinate efforts under OASIS Blue. The Energy Interoperation Technical Committee defines standards for the collaborative and transactive use of energy within demand response and distributed energy resources. The Energy Market Information Exchange (eMIX) Technical Committee works on exchanging pricing information and product definitions in energy markets. The Open Building Information Exchange (oBIX) Technical Committee enables mechanical and electrical control systems in buildings to communicate with enterprise applications. Members of the oBIX TC plan to use the WS-Calendar specification to coordinate control system performance expectations with enterprise and smart grid activities.
David Chassin of Pacific Northwest National Laboratory, chair of the OASIS Blue Steering Committee: "OASIS Blue provides a safe, neutral environment where stakeholders can cooperate to define clear taxonomies and information-sharing protocols that will be recognized by the international standards community." Other OASIS Blue Steering Committee members include Steven Bushby of NIST, Bob Dolin of Echelon, Rik Drummond of the Drummond Group, Girish Ghatikar of Lawrence Berkeley National Laboratory, Francois Jammes of Schneider Electric, Arnaud Martens of Belgian SPF Finances, Dana K. "Deke" Smith of buildingSMART alliance, and Jane L. Snowdon, Ph.D., of IBM.

IETF Internet Draft: Security Requirements for HTTP

Jeff Hodges and Barry Leiba (eds), IETF Internet Draft

An updated version of the IETF Informational Internet Draft has been published documenting "Security Requirements for HTTP." Recent Internet Engineering Steering Group (IESG) practice dictates that IETF protocols must specify mandatory-to-implement (MTI) security mechanisms, so that all conformant implementations share a common baseline. This document examines all widely deployed HTTP security technologies, and analyzes the trade-offs of each. The document examines the effects of applying security constraints to Web applications, documents the properties that result from each method, and will make Best Current Practice recommendations for HTTP security in a later document version.
Some existing HTTP security mechanisms include: Forms and Cookies; HTTP Access Authentication (Basic Authentication, Digest Authentication, Authentication Using Certificates in TLS, Other Access Authentication Schemes); Centrally-Issued Tickets; and Web Services security mechanisms. In addition to using TLS for client and/or server authentication, it is also very commonly used to protect the confidentiality and integrity of the HTTP session. For instance, both HTTP Basic authentication and Cookies are often protected against snooping by TLS. It should be noted that, in that case, TLS does not protect against a breach of the credential store at the server or against a keylogger or phishing interface at the client. TLS does not change the fact that Basic Authentication passwords are reusable and does not address that weakness.
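The reusable-password weakness of Basic Authentication is easy to see once you look at what actually goes on the wire: the Authorization header carries a base64 encoding of "user:password", which is an encoding, not encryption, and can be replayed by anyone who captures it (hence the reliance on TLS for transport protection):

```python
import base64

# Construct an HTTP Basic Authentication header (per RFC 2617) and show
# that the credential it carries is trivially recoverable and replayable.

def basic_auth_header(user: str, password: str) -> str:
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Authorization: Basic {token}"

header = basic_auth_header("alice", "s3cret")
print(header)

# Base64 is reversible by design -- any eavesdropper on an unencrypted
# connection recovers the password and can reuse it on every request:
token = header.split()[-1]
recovered = base64.b64decode(token).decode("utf-8")
print(recovered)   # alice:s3cret
```

Digest Authentication avoids sending the password itself, but as the draft notes, none of the widely deployed schemes removes the server-side credential-store and client-side phishing risks.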
It is possible that HTTP will be revised in the future. "HTTP/1.1" (RFC 2616) and "Use and Interpretation of HTTP Version Numbers" (RFC 2145) define conformance requirements in relation to version numbers. In HTTP 1.1, all authentication mechanisms are optional, and no single transport substrate is specified. Any HTTP revision that adds a mandatory security mechanism or transport substrate will have to increment the HTTP version number appropriately. All widely used schemes are non-standard and/or proprietary..."