Thursday, March 18, 2010

Public Data: Translating Existing Models to RDF

"As we encourage linked data adoption within the UK public sector,something we run into again and again is that (unsurprisingly) particulardomain areas have pre-existing standard ways of thinking about the datathat they care about. There are existing models, often with multipleserialisations, such as in XML and a text-based form, that are supportedby existing tool chains. In contrast, if there is existing RDF in thatdomain area, it's usually been designed by people who are more interestedin the RDF than in the domain area, and is thus generally more focusedon the goals of the typical casual data re-user rather than theprofessionals in the area...
To give an example, the international statistics community uses SDMX for representing and exchanging statistics... SDMX includes a well-thought-through model for statistical datasets and the observations within them, as well as standard concepts for things like gender, age, unit multipliers and so on. By comparison, SCOVO, the main RDF model for representing statistics, barely scratches the surface. This isn't the only example: the INSPIRE Directive defines how geographic information must be made available. GEMINI defines the kind of geospatial metadata that that community cares about. The Open Provenance Model is the result of many contributors from multiple fields, and again has a number of serialisations.
You could view this as a challenge: experts in their domains already have models and serialisations for the data that they care about; how can we persuade them to adopt an RDF model and serialisations instead? But that's totally the wrong question. Linked data doesn't, can't and won't replace existing ways of handling data. The question is really about how to enable people to reap the benefits of linked data; the answer, because HTTP-based addressing and typed linkage is usually hard to introduce into existing formats, is usually to publish data using an RDF-based model alongside existing formats. This might be done by generating an RDF-based format (such as RDF/XML or Turtle) as an alternative to the standard XML or HTML, accessible via content negotiation, or by providing a GRDDL transformation that maps an XML format into RDF/XML...
Modelling is a complex design activity, and you're best off avoiding doing it if you can. That means reusing conceptual models that have been built up for a domain as much as possible and reusing existing vocabularies wherever you can. But you can't and shouldn't try to avoid doing design when mapping from a conceptual model to a particular modelling paradigm such as a relational, object-oriented, XML or RDF model. If you're mapping to RDF, remember to take advantage of what it's good at, such as web-scale addressing and extensibility, and always bear in mind how easy or difficult your data will be to query. There is no point publishing linked data if it is unusable..." See also: Linked Data.
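The "RDF alongside existing formats, via content negotiation" idea above can be sketched in a few lines. This is a minimal illustration, not production negotiation logic: the file names are invented, and q-values in the Accept header are deliberately ignored.

```python
# Map of media types a hypothetical publisher can serve to the file
# holding each representation. The RDF serialisations sit alongside the
# pre-existing standard XML format rather than replacing it.
SUPPORTED = {
    "text/turtle": "data.ttl",
    "application/rdf+xml": "data.rdf",
    "application/xml": "data.xml",   # the existing standard format
}

def select_representation(accept_header, default="application/xml"):
    """Pick the first media type in the Accept header we can serve."""
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip()  # drop any q= parameter
        if media_type in SUPPORTED:
            return media_type, SUPPORTED[media_type]
    return default, SUPPORTED[default]
```

A client asking for `text/turtle` gets the Turtle file; one that only understands the existing XML format still gets what it always got.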

There is REST for the Weary Developer

This brief article provides an example of working with the Representational State Transfer style of software architecture. REST (Representational State Transfer) is a style of software architecture for accessing information on the Web. A RESTful service treats web services as resources, typically exchanging XML over the HTTP protocol. The term REST dates back to 2000, when Roy Fielding introduced it in his doctoral dissertation. The W3C recommends using WSDL 2.0 as the language for defining REST web services. To explain REST, we take an example of purchasing items from a catalog application...
First we will define CRUD operations for this service as follows. The term CRUD stands for the basic database operations Create, Read, Update, and Delete. In the example, you can see that creating a new item with a client-supplied Id is not supported: when a request for a new item is received, an Id is created and assigned to the new item. Also, we are not supporting the update and delete operations for the collection of items; update and delete are supported only for individual items...
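The CRUD restrictions just described can be written down as a table. This is a sketch of the catalog example's operation mapping; the resource paths (`/items`, `/items/{id}`) are assumptions for illustration, not taken from the article.

```python
# Hypothetical CRUD-to-HTTP mapping for the catalog service described
# above: create/read on the collection, read/update/delete on items.
CRUD_MAP = {
    # Collection resource: the server assigns the Id on create.
    ("collection", "create"): ("POST", "/items"),
    ("collection", "read"):   ("GET", "/items"),
    # Item resource: no create with a client-supplied Id.
    ("item", "read"):   ("GET", "/items/{id}"),
    ("item", "update"): ("PUT", "/items/{id}"),
    ("item", "delete"): ("DELETE", "/items/{id}"),
}

def route(resource, operation):
    """Return (HTTP method, URI template), or None if unsupported."""
    return CRUD_MAP.get((resource, operation))
```

Asking to delete the whole collection, for example, yields `None`, matching the rule that update and delete apply only to individual items.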
Interface documents: How does the client know what to expect in return when it makes a call for CRUD operations? The answer is the interface document. In this document you can define the CRUD operation mapping, the Item.xsd file, and the request and response XML. You can have separate XSDs for request and response, or the response can simply return text such as 'success' for the methods other than GET...
There are other frameworks available for RESTful services. Some of them are listed here: the Sun reference implementation for JAX-RS, code-named Jersey, which uses an HTTP web server called Grizzly and the Grizzly Servlet container; Ruby on Rails; Restlet; Django; and Axis2.

Now IBM's Getting Serious About Public IaaS

James Staten, Forrester Blog

"IBM has been talking a good cloud game for the last year or so. Theyhave clearly demonstrated that they understand what cloud computingis, what customers want from it and have put forth a variety of offeringsand engagements to help customers head down this path -- mostly throughinternal cloud and strategic rightsourcing options.
But its public cloud efforts, outside of application hosting, have been a bit of wait and see. Well, the company is clearly getting its act together in the public cloud space with today's announcement of the Smart Business Development and Test Cloud, a credible public Infrastructure as a Service (IaaS) offering. This new service is an extension of its developerWorks platform and gives its users a virtual environment through which they can assemble, integrate and validate new applications. Pricing on the service is as you would expect from an IaaS offering, and free for a limited time...
Certainly any IaaS can be used for test and development purposes, so IBM isn't breaking new ground here. But it's off to a solid start, with stated support from test and dev specialist partners SOASTA, VMLogix, AppFirst and Trinity Software bringing their tools to the IBM test cloud..." See also: Jeffrey Schwartz in GCN.

Aggregative Digital Libraries: D-NET Software Toolkit and OAIster System

"Aggregative Digital Library Systems (ADLSs) provide end users with webportals to operate over an information space of descriptive metadatarecords, collected and aggregated from a pool of possibly heterogeneousrepositories. Due to the costs of software realization and systemmaintenance, existing "traditional" ADLS solutions are not easilysustainable over time for the supporting organizations. Recently, theDRIVER EC project proposed a new approach to ADLS construction, basedon Service-Oriented Infrastructures. The resulting D-NET software toolkitenables a running, distributed system in which one or multipleorganizations can collaboratively build and maintain theirservice-oriented ADLSs in a sustainable way. Aggregative Digital LibrarySystems (ADLSs) typically address two main challenges: (1) populating aninformation space of metadata records by harvesting and normalizingrecords from several OAI-PMH compatible repositories; and (2) providingportals to deliver the functionalities required by the user communityto operate over the aggregated information space, for example, search,annotations, recommendations, collections, user profiling, etc.
Repositories are defined here as software systems that typically offer functionalities for storing and accessing research publications and related metadata. Access usually takes the twofold form of search through a web portal and bulk metadata retrieval through OAI-PMH interfaces. In recent years, research institutions, university libraries, and other organizations have been increasingly setting up repository installations (based on technologies such as Fedora, ePrints, DSpace, Greenstone, OpenDLib, etc.) to improve the impact and visibility of their user communities' research outcomes.
In this paper, we advocate that D-NET's 'infrastructural' approach to ADLS realization and maintenance proves to be generally more sustainable than 'traditional' ones. To demonstrate our thesis, we report on the sustainability of the 'traditional' OAIster System ADLS, based on DLXS software (University of Michigan), and that of the 'infrastructural' DRIVER ADLS, based on D-NET.
As an exemplar of traditional solutions we rely on the well-known OAIster System, whose technology was realized at the University of Michigan. The analysis will show that constructing static or evolving ADLSs using D-NET can notably reduce software realization costs and that, for evolving requirements, refinement costs for maintenance can be made more sustainable over time..."

Definitions for Expressing Standards Requirements in IANA Registries

The Internet Engineering Steering Group (IESG) has received a request to consider the specification "Definitions for Expressing Standards Requirements in IANA Registries" as a Best Current Practice RFC (BCP). The IESG plans to make a decision in the next few weeks, and solicits final comments on this action; please send substantive comments to the IETF mailing lists by 2010-04-14.
Abstract: "RFC 2119 defines words that are used in IETF standardsdocuments to indicate standards compliance. These words are fine fordefining new protocols, but there are certain deficiencies in usingthem when it comes to protocol maintainability. Protocols are maintainedby either updating the core specifications or via changes in protocolregistries. For example, security functionality in protocols oftenrelies upon cryptographic algorithms that are defined in externaldocuments. Cryptographic algorithms have a limited life span, and newalgorithms regularly phased in to replace older algorithms. This documentproposes standard terms to use in protocol registries and possibly instandards track and informational documents to indicate the life cyclesupport of protocol features and operations.
The proposed requirement words for IANA protocol registries include the following. (1) MANDATORY: This is the strongest requirement; for an implementation to ignore it there MUST be a valid and serious reason. (2) DISCRETIONARY: For implementations, any implementation MAY or MAY NOT support this entry in the protocol registry, and the presence or omission of this MUST NOT be used to judge implementations on standards compliance; for operations, any use of this registry entry in operation is supported, and ignoring or rejecting requests using this protocol component MUST NOT be used as a basis for asserting lack of compliance. (3) OBSOLETE: For implementations, new implementations SHOULD NOT support this functionality; for operations, any use of this functionality in operation MUST be phased out. (4) ENCOURAGED: This word is added to a registry entry when new functionality is added and before it is safe to rely solely on it; protocols that have the ability to negotiate capabilities MAY NOT need this state. (5) DISCOURAGED: This requirement is placed on an existing function that is being phased out. It is similar in spirit to both MUST- and SHOULD- as defined and used in certain RFCs such as RFC 4835. (6) RESERVED: Sometimes there is a need to reserve certain values to avoid problems, such as values that have been used in implementations but were never formally registered; in other cases reserved values are magic numbers that may be used in the future as escape valves if the number space becomes too small. (7) AVAILABLE: A value that can be allocated by IANA at any time..."
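The seven proposed words form a small, closed vocabulary, which can be made concrete as an enumeration. The helper below is an assumption about how a registry consumer might act on the words, not part of the draft itself.

```python
from enum import Enum

# The proposed IANA registry requirement words, as an enumeration.
class Requirement(Enum):
    MANDATORY = "mandatory"
    DISCRETIONARY = "discretionary"
    OBSOLETE = "obsolete"
    ENCOURAGED = "encouraged"
    DISCOURAGED = "discouraged"
    RESERVED = "reserved"
    AVAILABLE = "available"

def new_implementation_should_support(level):
    """Should a *new* implementation support an entry at this level?

    Returns True/False where the word is decisive, or None where the
    draft leaves the choice to local policy (discretionary, encouraged,
    discouraged). This three-way reading is an illustrative assumption.
    """
    if level is Requirement.MANDATORY:
        return True
    if level in (Requirement.OBSOLETE, Requirement.RESERVED,
                 Requirement.AVAILABLE):
        return False   # obsolete, or not (yet) assigned to anything
    return None
```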
This document is motivated by the experiences of the editors in trying to maintain registries for DNS and DNSSEC. For example, DNS defines a registry for hash algorithms used for a message authentication scheme called TSIG; the first entry in that registry was for HMAC-MD5. The DNSEXT working group decided to try to decrease the number of algorithms listed in the registry and add a column to the registry listing the requirement level for each one. Upon reading that HMAC-MD5 was tagged as 'OBSOLETE', a firestorm started: it was interpreted as the DNS community making a statement on the status of HMAC-MD5 for all uses. See also: 'Using MUST and SHOULD and MAY'.

New Models of Human Language to Support Mobile Conversational Systems

W3C has announced a Workshop on Conversational Applications: Use Cases and Requirements for New Models of Human Language to Support Mobile Conversational Systems. The workshop will be held June 18-19, 2010 in New Jersey, US, hosted by Openstream. The main outcome of the workshop will be the publication of a document that will serve as a guide for improving the W3C language model. W3C membership is not required to participate in this workshop. The current program committee consists of: Paolo Baggia (Loquendo), Daniel C. Burnett (Voxeo), Deborah Dahl (W3C Invited Expert), Kurt Fuqua (Cambridge Mobile), Richard Ishida (W3C), Michael Johnston (AT&T), James A. Larson (W3C Invited Expert), Sol Lerner (Nuance), David Nahamoo (IBM), Dave Raggett (W3C), Henry Thompson (W3C/University of Edinburgh), and Raj Tumuluri (Openstream).
"A number of developers of conversational voice applications feel thatthe model of human language currently supported by W3C standards suchas SRGS, SISR and PLS is not adequate and that developers need newcapabilities in order to support more sophisticated conversationalapplications. The goal of the workshop therefore is to understand thelimitations of the current W3C language model in order to develop amore comprehensive model. We plan to collect and analyze use cases andprioritize requirements that ultimately will be used to identifyimprovements to the W3C language model. Just as W3C developed SSML 1.1to broaden the languages for which SSML is useful, this effort willresult in improved support for language capabilities that areunsupported today.
Suggested Workshop topics for position papers include: (1) Use cases and requirements for grammar formalisms more powerful than SRGS's context-free grammars that are needed to implement tomorrow's applications. (2) What are the common aspects of human language models for different languages that can be factored into reusable modules? (3) Use cases and requirements for realigning/extending SRGS, PLS and SISR to support more powerful human language models. (4) Use cases and requirements for sharing grammars among concurrent applications. (5) Use cases that illustrate requirements for natural language capabilities for conversational dialog systems that cannot easily be implemented using the current W3C conversational language model. (6) Use cases and requirements for speech-enabled applications that can be used across multiple languages (English, German, Spanish, ...) with only minor modifications. (7) Use cases and requirements for composing the behaviors of multiple speech-enabled applications that were developed independently, without requiring changes to the applications. (8) Use cases and requirements motivating the need to resolve ellipses and anaphoric references to previous utterances.
Position papers, due April 2, 2010, must describe requirements and use cases for improving W3C standards for conversational interaction and how the use cases justify one or more of these topics: formal notations for representing grammar in syntax, morphology, phonology, and prosodics; engine standards for improvement in processing syntax, morphology, phonology, and lexicography; lexicography standards for parts of speech, grammatical features and polysemy; formal semantic representation of human language including verbal tense, aspect, valency, plurality, pronouns, and adverbs; efficient data structures for binary representation and passing of parse trees, alternate lexical/morphologic analyses, and alternate phonologic analyses; and other suggested areas of improvement for standards-based conversational systems development..." See also: W3C Workshops.

Integrating Composite Applications on the Cloud Using SCA

"Elastic computing has made it possible for organizations to use cloudcomputing and a minimum of computing resources to build and deploy anew generation of applications. Using the capabilities provided bythe cloud, enterprises can quickly create hybrid composite applicationson the cloud using the best practices of service-component architectures(SCA).
Since SCA promotes all the best practices used in service-oriented architectures (SOA), building composite applications using SCA is one of the best guidelines for creating cloud-based composite applications. Applications created using several different runtimes running on the cloud can be leveraged to create new components, and hybrid composite applications that scale on demand across private/public cloud models can also be built using secure transport data channels.
In this article, we show how to build and integrate composite applications using Apache Tuscany, the Eucalyptus open source cloud framework, and OpenVPN to create a hybrid composite application. To show that distributed applications comprising composite modules (distributed across the cloud and enterprise infrastructure) can be integrated and function as a single unit using SCA without compromising security, we create a composite application whose components are spread over different domains distributed across the cloud and the enterprise infrastructure. We then use SCA to host and integrate this composite application so that it fulfills the necessary functional requirements. To ensure information and data security, we set up a virtual private network (VPN) between the different domains (cloud and enterprise), creating a point-to-point encrypted network which provides secure information exchange between the two environments...
This project illustrates that distributed applications comprising composite modules (distributed across the cloud and enterprise infrastructure) can be integrated and made to function as a single unit using Service Component Architecture (SCA) without compromising security..."

IETF Update: Specification for a URI Template

A revised version of the IETF Standards Track Internet Draft "URI Template" has been published. From the abstract: "A URI Template is a compact sequence of characters for describing a range of Uniform Resource Identifiers through variable expansion. This specification defines the URI Template syntax and the process for expanding a URI Template into a URI, along with guidelines for the use of URI Templates on the Internet."
Overview: "A Uniform Resource Identifier (URI) is often used to identifya specific resource within a common space of similar resources... URITemplates provide a mechanism for abstracting a space of resourceidentifiers such that the variable parts can be easily identified anddescribed. URI templates can have many uses, including discovery ofavailable services, configuring resource mappings, defining computed links,specifying interfaces, and other forms of programmatic interaction withresources.
A URI Template provides both a structural description of a URI space and, when variable values are provided, a simple instruction on how to construct a URI corresponding to those values. A URI Template is transformed into a URI-reference by replacing each delimited expression with its value as defined by the expression type and the values of variables named within the expression. The expression types range from simple value expansion to multiple key=value lists. The expansions are based on the URI generic syntax, allowing an implementation to process any URI Template without knowing the scheme-specific requirements of every possible resulting URI.
A URI Template may be provided in absolute form, as in the examples above, or in relative form if a suitable base URI is defined... A URI Template is also an IRI template, and the result of template processing can be rendered as an IRI by transforming the pct-encoded sequences to their corresponding Unicode character if the character is not in the reserved set... Parsing a valid URI Template expression does not require building a parser from the given ABNF. Instead, the set of allowed characters in each part of a URI Template expression has been chosen to avoid complex parsing, and breaking an expression into its component parts can be achieved by a series of splits of the character string. Example Python code [is planned] that parses a URI Template expression and returns the operator, argument, and variables as a tuple..."
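In the spirit of the planned example code mentioned in the draft, here is a rough sketch of simple variable expansion. It handles only the plain `{var}` expression form, not the draft's full operator set, and its all-characters pct-encoding choice is a simplifying assumption.

```python
import re
from urllib.parse import quote

def expand(template, variables):
    """Expand simple {var} expressions in a URI Template.

    Each {name} is replaced by the pct-encoded value of that variable;
    unknown variables expand to the empty string. This covers only the
    simplest expression type described in the draft.
    """
    def repl(match):
        name = match.group(1)
        value = variables.get(name, "")
        return quote(str(value), safe="")   # pct-encode reserved characters
    return re.sub(r"\{(\w+)\}", repl, template)
```

As the overview notes, no ABNF-driven parser is needed: a regular split on `{`...`}` boundaries is enough for this expression form.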

What Standardization Will Mean For Ruby

Mirko Stocker, InfoQ
Ruby's inventor Matz announced plans to standardize Ruby in order to "improve the compatibility between different Ruby implementations [..] and to ease Ruby's way into the Japanese government". The first proposal for standardization will be to the Japanese Industrial Standards Committee and, in a further step, to the ISO, to become an international standard. For now, a first draft (that weighs in at over 300 pages) and an official announcement are available. Alternatively, there's a wiki under development to make the standard available in HTML format.

A very different approach to uniting Ruby implementations is the RubySpec project -- a community-driven effort to build an executable specification. RubySpec is an offspring of the Rubinius project... [But] what do our readers think: will it be easier to introduce Ruby in their organizations if there's an ISO standard behind it?

According to RubySpec lead Brian Ford: "I think the ISO Standardization effort is very important for Ruby, both for the language and for the community, which in my mind includes the Ruby programmers, people who use software written in Ruby, and the increasing number of businesses based on or using software written in Ruby. The Standardization document and RubySpec are complementary in my view. The document places primary importance on describing Ruby in prose with appropriate formatting formalities. The document envisions essentially one definition of Ruby. RubySpec, in contrast, places primary importance on code that demonstrates the behavior of Ruby. However, RubySpec also emphasizes describing Ruby in prose as an essential element of the executable specification, and that is the reason we use RSpec-compatible syntax. RubySpec also attempts to capture the behavior of the union of all Ruby implementations. It provides execution guards that document the specs for differences between implementations. For example, not all platforms used to implement Ruby support forking a process.
So the specs have guards for which implementations provide that feature... This illustrates an important difference between the ISO Standardization document and RubySpec. The ISO document can simply state that a particular aspect of the language is "implementation defined" and provide no further guidance. Unfortunately, implementing such a standard can be difficult, as we have seen with the confusion caused by various browser vendors attempting to implement CSS. RubySpec attempts to squeeze the total number of unspecified Ruby behaviors down to the smallest size possible..." See also: the Ruby Standard Wiki.

New Release of Oxygen XML Editor and Oxygen XML Author Supports DITA

Developers of the Oxygen XML Editor and Author tool suite have announced the immediate availability of version 11.2 of the XML Editor and XML Author, containing a comprehensive set of tools supporting all the XML-related technologies. Oxygen combines content author features like the CSS-driven visual XML editor with a fully featured XML development environment. It has ready-to-use support for the main document frameworks DITA, DocBook, TEI and XHTML and also includes support for all XML Schema languages, XSLT/XQuery debuggers, a WSDL analyzer, XML databases, XML Diff and Merge, a Subversion client and more.

New features in version 11.2: Version 11.2 of Oxygen XML Editor improves the XML authoring, the XML development tools, the support for large documents and the SVN client. The visual XML editing (Author mode) is available now as a separate component that can be integrated in Java applications or, as an applet, in Web applications. A sample Web application showing the Author component in the browser, as an applet, editing DITA documents is available...

Other XML Author improvements include support for preserving the formatting of unchanged elements and an updated Author API containing a number of new extensions that allow customizing the Outline, the Breadcrumb and the Status Bar. The XSLT Debugger provides more flexibility and is the first debugger that can step inside XPath 2.0 expressions. The Saxon 9 EE bundled with Oxygen can be used to run XQuery 1.1 transformations. The XProc support was aligned with the recent update as W3C Proposed Recommendation and includes the latest Calabash XProc processor.

In 'Author for DITA' there is support for reusable components: a fragment of a topic can be extracted into a separate file for reuse in different topics. The component can be reused by inserting an element with a conref attribute where the content of the component is needed.
This works without any additional configuration and supports any DITA specialization. Similarly, there's support for content references management: the DITA framework includes actions for adding, editing and removing a content reference (conref, conkeyref, conrefend attributes) to/from an existing element... A new schema caching mechanism makes it possible to quickly open large DITA maps and their referred topics... See also: XML Author Component for the DITA Documentation Framework.
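The conref reuse mechanism described above can be sketched as follows. The file name, topic id and note content here are invented for illustration, not taken from the Oxygen documentation; only the conref addressing pattern (file#topic-id/element-id) is the standard DITA form.

```xml
<!-- reusable.dita: the file holding the extracted component -->
<topic id="reusable">
  <title>Reusable components</title>
  <body>
    <note id="power-warning">Disconnect power before servicing.</note>
  </body>
</topic>

<!-- In another topic, the component is pulled in wherever it is
     needed by an element carrying a conref attribute: -->
<note conref="reusable.dita#reusable/power-warning"/>
```

At publication time the referencing element is replaced by the content of the referenced `note`, so the warning text is maintained in one place.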

HTML5, Hardware Accelerated: First IE9 Platform Preview Available

Dean Hachamovitch, Windows Internet Explorer Weblog

At the Las Vegas MIX10 Conference, Microsoft Internet Explorer developers demonstrated "how the standard web patterns that developers already know and use broadly run better by taking advantage of PC hardware through IE9 on Windows." A blog article by Dean Hachamovitch provides an overview of what we showed, "across performance, standards, hardware-accelerated HTML5 graphics, and the availability of the IE9 Platform Preview for developers...

First, we showed IE9's new script engine, internally known as 'Chakra,' and the progress we've made on an industry benchmark for JavaScript performance... We showed our progress in making the same standards-based HTML, script, and formatting markup work across different browsers. We shared the data and framework that informed our approach, and demonstrated better support for several standards: HTML5, DOM, and CSS3. We showed IE9's latest Acid3 score (55); as we make progress on the industry goal of having the same markup that developers actually use working across browsers, our Acid3 score will continue to go up...

In several demonstrations, we showed the significant performance gains that graphically rich, interactive web pages enjoy when a browser takes full advantage of the PC's hardware capabilities through the operating system. The same HTML, script, and CSS markup work across several different browsers; the pages just run significantly faster in IE9 because of hardware-accelerated graphics. IE9 is also the first browser to provide hardware-accelerated SVG support...

The goal of standards and interoperability is that the same HTML, script, and formatting markup work the same across different browsers. Eliminating the need for different code paths for different browsers benefits everyone, and creates more opportunity for developers to innovate. The main technologies to call out here broadly are HTML5, CSS3, DOM, and SVG. The IE9 test drive site has more specifics and samples.
At this time, we're looking for developer feedback on our implementation of HTML5's parsing rules, Selection APIs, XHTML support, and inline SVG. Within CSS3, we're looking for developer feedback on IE9's support for Selectors, Namespaces, Colors, Values, Backgrounds and Borders, and Fonts. Within DOM, we're looking for developer feedback on IE9's support for Core, Events, Style, and Range... As IE makes more progress on the industry goal of 'same markup' for standards and parts of standards that developers actually use, the Acid3 score will continue to go up as a result. A key part of our approach to web standards is the development of an industry standard test suite. Today, Microsoft has submitted over 100 additional tests of HTML5, CSS3, DOM, and SVG to the W3C..." See also: Paul Krill's InfoWorld article.

Open Source of ebMS V3 Message Handler and AS4 Profile on Sourceforge

Holodeck, an open source implementation of ebXML Messaging Version 3 and its AS4 profile, is now available on SourceForge with online documentation. The ebXML Messaging V3 specification defines a communications-protocol-neutral method for exchanging electronic business messages. It defines specific Web Services-based enveloping constructs supporting reliable, secure delivery of business information. Furthermore, the specification defines a flexible enveloping technique, permitting messages to contain payloads of any format type...

From the abstract of the OASIS specification "AS4 Profile of ebMS V3": "While ebMS 3.0 represents a leap forward in reducing the complexity of Web Services B2B messaging, the specification still contains numerous options and comprehensive alternatives for addressing a variety of scenarios for exchanging data over a Web Services platform. The AS4 profile of the ebMS 3.0 specification has been developed in order to bring continuity to the principles and simplicity that made AS2 successful, while adding better compliance to Web services standards, and features such as message pulling capability and a built-in Receipt mechanism. Using ebMS 3.0 as a base, a subset of functionality is defined along with implementation guidelines adopted based on the 'just-enough' design principles and AS2 functional requirements to trim down ebMS 3.0 into a more simplified and AS2-like specification for Web Services B2B messaging. This document defines the AS4 profile as a combination of a conformance profile that concerns an implementation capability, and of a usage profile that concerns how to use this implementation. A couple of variants are defined for the AS4 conformance profile -- the AS4 ebHandler profile and the AS4 Light Client profile -- that reflect different endpoint capabilities."

Holodeck's primary goal is to provide an open-source product for B2B messaging based on ebXML Messaging version 3 that can be used by ebXML communities as well as Web Services communities.
Because ebXML Messaging version 3 is compatible with web services, Holodeck provides an integration of ebXML, web services and AS4 in one package. Holodeck can be used in the following scenarios: (1) Pure ebXML messaging, B2B or within different departments of the same company. (2) Messaging gateway to an ESB, with the ESB providing integration within a company and Holodeck playing the gateway to communicate with the external world via messaging. (3) An environment where there is a need for both Web service consumption and heavy B2B messaging where web services fail...

Holodeck comes with a scalable architecture: a datastore for messages (JDO by default, a MySQL pre-configured option, and interfaces to other databases), and streaming for large messages (based on Axis2 streaming). The project is funded and maintained by Fujitsu America, Inc. The package comes with a "no coding necessary" out-of-the-box experience and tutorials, allowing you to deploy and test without having to write code up-front, using a directory system as an application-layer substitute to store, as files, the elements of messages to be sent, and to receive them. Developers can download binaries and source code, and get a fresh copy directly from the Subversion versioning system... See also: the Holodeck resources from SourceForge.

IESG Issues Last Call Review for MODS/MADS/METS/MARCXML/SRU Media Types

The Internet Engineering Steering Group (IESG) has received a request from an individual submitter to consider the following Standards Track I-D as an IETF Proposed Standard: "The Media Types application/mods+xml, application/mads+xml, application/mets+xml, application/marcxml+xml, application/sru+xml." The IESG plans to make a decision in the next few weeks, and solicits final comments on this action; please send substantive comments to the IETF lists by 2010-04-12.

This document "specifies Media Types for the following formats: MODS (Metadata Object Description Schema), MADS (Metadata Authority Description Schema), METS (Metadata Encoding and Transmission Standard), MARCXML (MARC21 XML Schema), and the SRU (Search/Retrieve via URL Response Format) Protocol response XML schema. These are all XML schemas providing representations of various forms of information including metadata and search results.

The U.S. Library of Congress, on behalf of and in collaboration with various components of the metadata and information retrieval community, has issued specifications which define formats for representation of various forms of information including metadata and search results. This memo provides information about the Media Types associated with several of these formats, all of which are XML schemas. (1) 'MODS: Metadata Object Description Schema' is an XML schema for a bibliographic element set that may be used for a variety of purposes, and particularly for library applications. (2) 'MADS: Metadata Authority Description Schema' is an XML schema for an authority element set used to provide metadata about agents (people, organizations), events, and terms (topics, geographics, genres, etc.). It is a companion to the MODS Schema.
(3) 'METS: Metadata Encoding and Transmission Standard' defines an XML schema for encoding descriptive, administrative, and structural metadata regarding objects within a digital library. (4) 'MARCXML: MARC21 XML Schema' is an XML schema for the direct XML representation of the MARC format (for which there already exists a media type, application/marc); by 'direct XML representation' is meant that it encodes the actual MARC data within XML... (5) 'SRU: Search/Retrieve via URL Response Format' provides an XML schema for the SRU response. SRU is a protocol, and the media type 'sru+xml' pertains specifically to the default SRU response. The SRU response may be supplied in any of a number of suitable schemas (RSS and Atom, for example), and the client identifies the desired format in the request, hence the need for a media type. This mechanism will be introduced in SRU 2.0; in previous versions (that is, all versions to date; 2.0 is in development) all responses are supplied in the existing default format, so no media type was necessary. SRU 2.0 is being developed within OASIS. See also the IANA registration for MIME Media Types.
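The content-negotiation angle is easy to picture: an SRU 2.0 client names the response format it wants in the request, and the registered media type gives it a precise label to use. A minimal sketch in Python (the endpoint URL and query are hypothetical, for illustration only):

```python
import urllib.request

# Hypothetical SRU 2.0 endpoint; the URL and query are invented examples.
SRU_ENDPOINT = (
    "http://example.org/sru"
    "?version=2.0&operation=searchRetrieve&query=dinosaur"
)

# Asking for the default SRU response schema maps naturally onto the
# proposed application/sru+xml media type.
request = urllib.request.Request(
    SRU_ENDPOINT,
    headers={"Accept": "application/sru+xml"},
)

print(request.get_header("Accept"))  # -> application/sru+xml
```

A server honoring the request would label its reply with a matching Content-Type header; a client asking instead for RSS or Atom would name those media types the same way.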

OASIS SCA-C-C++ Technical Committee Publishes Two Public Review Drafts

Bryan Aupperle, David Haney, Pete Robbins (eds), OASIS Review Drafts. Members of the OASIS Service Component Architecture / C and C++ (SCA-C-C++) Technical Committee have released two Committee Drafts for public review through March 25, 2010. This TC is part of the OASIS Open Composite Services Architecture (Open CSA) Member Section, which advances open standards that simplify SOA application development. Open CSA brings together vendors and users from around the world to collaborate on standard ways to unify services regardless of programming language or deployment platform. Open CSA promotes the further development and adoption of the Service Component Architecture (SCA) and Service Data Objects (SDO) families of specifications. SCA helps organizations more easily design and transform IT assets into reusable services that can be rapidly assembled to meet changing business requirements. SDO lets application programmers uniformly access and manipulate data from heterogeneous sources, including relational databases, XML data sources, Web services, and enterprise information systems. "Service Component Architecture Client and Implementation Model for C++ Specification Version 1.1" describes "the SCA Client and Implementation Model for the C++ programming language. The SCA C++ implementation model describes how to implement SCA components in C++. A component implementation itself can also be a client to other services provided by other components or external services. The document describes how a C++-implemented component gets access to services and calls their operations. This document also explains how non-SCA C++ components can be clients to services provided by other components or external services. The document shows how those non-SCA C++ component implementations access services and call their operations." "Service Component Architecture Client and Implementation Model for C Specification Version 1.1" describes "the SCA Client and Implementation Model for the C programming language.
The SCA C implementation model describes how to implement SCA components in C. A component implementation itself can also be a client to other services provided by other components or external services. The document describes how a component implemented in C gets access to services and calls their operations. The document also explains how non-SCA C components can be clients to services provided by other components or external services. The document shows how those non-SCA C component implementations access services and call their operations." The OASIS SCA-C-C++ TC is developing "the C and C++ programming model for clients and component implementations using the Service Component Architecture (SCA). SCA defines a model for the creation of business solutions using a Service-Oriented Architecture, based on the concept of Service Components which offer services and which make references to other services. SCA models business solutions as compositions of groups of service components, wired together in a configuration that satisfies the business goals. SCA applies aspects such as communication methods and policies for infrastructure capabilities such as security and transactions through metadata attached to the compositions." See also the Model for C specification.

Early Draft Review for JSR-310 Specification: Date and Time API

Stephen Colebourne, Michael Nascimento Santos (et al., eds), JSR Draft. Project editors for Java Specification Request 310: Date and Time API have published an Early Draft Review (EDR) to gain feedback on an early version of the JSR. The contents of the EDR are the prose specification and the javadoc. According to the original published Request, JSR 310 "will provide a new and improved date and time API for Java. The main goal is to build upon the lessons learned from the first two APIs (Date and Calendar) in Java SE, providing a more advanced and comprehensive model for date and time manipulation. The new API will be targeted at all applications needing a data model for dates and times. This model will go beyond classes to replace Date and Calendar, to include representations of date without time, time without date, durations and intervals. This will raise the quality of application code. For example, instead of using an int to store a duration, and javadoc to describe it as being a number of days, the date and time model will provide a class defining it unambiguously. The new API will also tackle related date and time issues. These include formatting and parsing, taking into account the ISO 8601 standard and its implementations, such as XML. In addition, the areas of serialization and persistence will be considered... In this specification model, dates and times are separated into two basic use cases: machine-scale and human-scale. Machine-scale time represents the passage of time using a single, continually incrementing number. The rules that determine how the scale is measured and communicated are typically defined by international scientific standards organisations. Human-scale time represents the passage of time using a number of named fields, such as year, month, day, hour, minute and second.
The rules that determine how the fields work together are defined in a calendar system... From the specification introduction: "Many Java applications require logic to store and manipulate dates and times. At present, Java SE provides a number of disparate APIs for this purpose, including Date, Calendar, SQL Date/Time/Timestamp and XML Duration/XMLGregorianCalendar. Unfortunately, these APIs are not all particularly well-designed and they do not cover many use cases needed by developers. As an example, Java developers currently have no standard Java SE class to represent the concept of a date without a time, a time without a date, or a duration. The result of these missing features has been widespread abuse of the facilities which are provided, such as using the Date or Calendar class with the time set to midnight to represent a date without a time. Such an approach is very error-prone: there are certain time zones where midnight doesn't exist once a year due to the daylight saving time cutover. JSR-310 tackles this by providing a comprehensive set of date and time classes suitable for Java SE today. The specification includes: Date and Time; Date without Time; Time without Date; Offset from UTC; Time Zone; Durations; Periods; Formatting and Parsing; and a selection of calendar systems... Design Goals for JSR-310: (1) Immutable: the JSR-310 classes should be immutable wherever possible. Experience over time has shown that APIs at this level should consist of simple immutable objects. These are simple to use, can be easily shared, are inherently thread-safe, friendly to the garbage collector, and tend to have fewer bugs due to the limited state-space. (2) Fluent API: the API strives to be fluent within the standard patterns of Java SE. A fluent API has methods that are easy to read and understand, specifically when chained together. The key goal here is to simplify the use and enhance the readability of the API.
(3) Clear, explicit and expected: each method in the API should be well-defined and clear in what it does. This isn't just a question of good javadoc, but also of ensuring that the method can be called in isolation successfully and meaningfully. (4) Extensible: the API should be extensible in well-defined ways by application developers, not just JSR authors. The reasoning is simple: there are just far too many weird and wonderful ways to manipulate time. A JSR cannot capture all of them, but an extensible JSR design can allow for them to be added as required by application developers or open source projects..." See also the InfoQ article by Alex Blewitt and Charles Humble.
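JSR-310 is a Java API, but the machine-scale/human-scale split and the goal of dedicated types instead of bare ints are language-neutral. They can be illustrated with Python's standard library (the dates and the 30-day grace period below are arbitrary examples):

```python
from datetime import date, datetime, timedelta, timezone

# Machine-scale time: a single, continually incrementing number
# (here, seconds since the Unix epoch).
instant = datetime(2010, 3, 18, 12, 0, tzinfo=timezone.utc)
machine_scale = instant.timestamp()

# Human-scale time: named fields governed by a calendar system.
human_scale = (instant.year, instant.month, instant.day, instant.hour)

# A date without a time is a distinct type, not a datetime at midnight,
# which sidesteps the "midnight doesn't exist" DST pitfall noted above.
publication_date = date(2010, 3, 18)

# A duration is a dedicated type rather than a bare int documented
# as "a number of days".
grace_period = timedelta(days=30)
deadline = publication_date + grace_period

print(machine_scale, human_scale, deadline)  # -> 1268913600.0 (2010, 3, 18, 12) 2010-04-17
```

The same separation appears in JSR-310 itself as the machine-scale instant classes versus the human-scale calendar classes.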

W3C XML Security Working Group Releases Four Working Drafts for Review

Members of the W3C XML Security Working Group have published four Working Draft specifications for public review. This WG, along with the W3C Web Security Context Working Group, is part of the W3C XML Security Activity, and is chartered to take the next step in developing the XML security specifications. "XML Encryption Syntax and Processing Version 1.1" specifies "a process for encrypting data and representing the result in XML. The data may be in a variety of formats, including octet streams and other unstructured data, or structured data formats such as XML documents, an XML element, or XML element content. The result of encrypting data is an XML Encryption element which contains or references the cipher data." "XML Security Algorithm Cross-Reference" is a W3C Note which "summarizes XML Security algorithm URI identifiers and the specifications associated with them. The various XML Security specifications have defined a number of algorithms of various types, while allowing and expecting additional algorithms to be defined later. Over time, these identifiers have been defined in a number of different specifications, including XML Signature, XML Encryption, RFCs and elsewhere. This makes it difficult for users of the XML Security specifications to know whether and where a URI for an algorithm of interest has been defined, and can lead to the use of incorrect URIs. The purpose of this Note is to collect the various known URIs at the time of its publication and indicate the specifications in which they are defined, in order to avoid confusion and errors... The Note indicates explicitly whether an algorithm is mandatory or recommended in other specifications. If nothing is said, then readers should assume that support for the algorithms given is optional." The "XML Security Generic Hybrid Ciphers" Working Draft "augments XML Encryption Version 1.1 by defining algorithms, XML types and elements necessary to enable use of generic hybrid ciphers in XML Security applications.
Generic hybrid ciphers allow for a consistent treatment of asymmetric ciphers when encrypting data, and consist of a key encapsulation algorithm with associated parameters and a data encapsulation algorithm with associated parameters." Fourth, "XML Security RELAX NG Schemas" serves to publish RELAX NG schemas for XML Security specifications, including XML Signature 1.1 and XML Signature Properties. See also the W3C Web Security Context WG and XML Security WG.
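For readers unfamiliar with XML Encryption's vocabulary, the shape of the result described above (an EncryptedData element that contains or references the cipher data) can be sketched with Python's standard library. The ciphertext below is a placeholder, not real encryption; the element names and the AES-GCM algorithm URI are taken from the XML Encryption 1.1 draft:

```python
import base64
import xml.etree.ElementTree as ET

XMLENC_NS = "http://www.w3.org/2001/04/xmlenc#"
ET.register_namespace("xenc", XMLENC_NS)

# Placeholder ciphertext; a real implementation would carry the output
# of the named encryption algorithm here.
cipher_bytes = base64.b64encode(b"...encrypted octets...").decode("ascii")

encrypted_data = ET.Element(f"{{{XMLENC_NS}}}EncryptedData")
ET.SubElement(
    encrypted_data,
    f"{{{XMLENC_NS}}}EncryptionMethod",
    Algorithm="http://www.w3.org/2009/xmlenc11#aes128-gcm",
)
cipher_data = ET.SubElement(encrypted_data, f"{{{XMLENC_NS}}}CipherData")
ET.SubElement(cipher_data, f"{{{XMLENC_NS}}}CipherValue").text = cipher_bytes

print(ET.tostring(encrypted_data, encoding="unicode"))
```

The alternative the spec mentions, referencing rather than containing the cipher data, would replace CipherValue with a CipherReference element pointing at the octets.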

Wednesday, March 17, 2010

Document Format Standards and Patents

Alex Brown, Blog
This post is part of an ongoing series. It expands on Item 9 of 'Reforming Standardisation in JTC 1', which proposed Ten Recommendations for Reform; Item 9 was: "Clarify intellectual property policies: International Standards must have clearly stated IP policies, and avoid unacceptable patent encumbrances."
Historically, patents have been a fraught topic, co-existing uneasily with standards. Within JTC 1, perhaps one of the most notorious recent examples surrounded the JPEG standard, and, prompted in part by such problems, there are certainly many people of good will wanting better management of IP in standards. Judging by some recent developments in document format standardisation, it seems probable that this will be the area where progress can next be made...
The Myth of Unencumbered Technology: Given the situation we are evidently in, it is clear that no technology is safe. The brazen claims of corporations, the lack of diligence by the US Patent Office, and the capriciousness of courts mean that any technology, at any time, may suddenly become patent encumbered. Technical people, being logical and reasonable, often make the mistake of thinking the system is bound by logic and reason; they assume that because they can see 'obvious' prior art, then it will apply; however, as the case of the i4i patent vividly illustrates, this is simply not so.
While the "broken stack" of patents is beyond repair by any single standards body, at the very least the correct application of the rules can make the situation for users of document format standards more transparent and certain. In the interests of making progress in this direction, a number of points need addressing now. (1) Users should be aware that the various covenants and promises being pointed to by the US vendors need not be relevant to them as regards standards use. Done properly, International Standardization can give a clearer and stronger guarantee of license availability, without the caveats, interpretable points and exit strategies these vendors' documents invariably have. (2) In particular, it should be of concern to NBs that there is no entry in JTC 1's patent database for OOXML (there is one for DIS 29500, its precursor text: a ZRAND promise from Microsoft); there is no entry whatsoever for ODF... (3) In the case of the i4i patent, one implementer has already commented that implementing CustomXML in its entirety may run the risk of infringement, and this is probably, after all, why Microsoft patched Word in the field to remove some aspects of its CustomXML support... (4) When declaring their patents to JTC 1, patent holders are given an option whether to make a general declaration about the patents that apply to a standard, or to make a particular declaration about each and every itemized patent which applies. I believe NBs should be insisting that patent holders enumerate precisely the patents they hold which they claim apply. There is obviously much to do, and I am hoping that at the forthcoming SC 34 meetings in Stockholm this work can begin... See also Part 1 of the article series.

Consensus Emerges for Key Web Application Standard

"Browser makers, grappling with outmoded technology and a vision to rebuild the Web as a foundation for applications, have begun converging on a seemingly basic but very important element of cloud computing. That ability is called local storage, and the new mechanism is called Indexed DB. Indexed DB, proposed by Oracle and initially called WebSimpleDB, is largely just a prototype at this stage, not something Web programmers can use yet. But already it's won endorsements from Microsoft, Mozilla, and Google; together, Internet Explorer, Firefox, and Chrome account for more than 90 percent of the usage on the Net today.
Standardization could come: advocates have worked Indexed DB into the considerations of the W3C, the World Wide Web Consortium that standardizes HTML and other Web technologies. In the W3C discussions, Indexed DB got a warm reception from Opera, the fifth-ranked browser.
It may sound perverse, but the ability to store data locally on a computer turns out to be a very important part of the Web application era that's really just getting under way. The whole idea behind cloud computing is to put applications on the network, liberating them from being tied to a particular computer, but it turns out that the computer still matters, because the network is neither fast nor ubiquitous. Local storage lets Web programmers save data onto computers where it's convenient for processors to access. That can mean, for example, that some aspects of Gmail and Google Docs can work while you're disconnected from the network. It also lets data be cached on the computer for quick access later. The overall state of the Web application is maintained on the server, but stashing data locally can make cloud computing faster and more reliable..."
An editor's draft of the W3C specification "Indexed Database API" is available online: "User agents need to store large numbers of objects locally in order to satisfy off-line data requirements of Web applications. 'Web Storage' [10-September-2009 WD] is useful for storing pairs of keys and their corresponding values. However, it does not provide in-order retrieval of keys, efficient searching over values, or storage of duplicate values for a key. This specification provides a concrete API to perform advanced key-value data management that is at the heart of most sophisticated query processors. It does so by using transactional databases to store keys and their corresponding values (one or more per key), and providing a means of traversing keys in a deterministic order. This is often implemented through the use of persistent B-tree data structures that are considered efficient for insertion and deletion as well as in-order traversal of very large numbers of data records." See also the latest editor's version of the Indexed Database API.
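The gap the draft identifies (Web Storage lacks in-order key retrieval and duplicate values per key) can be sketched with a toy in-memory store in Python. The class and method names below are invented for illustration; the real Indexed DB API is asynchronous, transactional, and backed by persistent B-trees rather than a sorted list:

```python
import bisect

class TinyIndexedStore:
    """Toy key-value store with the in-order key traversal that plain
    Web Storage lacks. Keys are kept sorted so range queries and
    deterministic traversal are cheap."""

    def __init__(self):
        self._keys = []      # sorted list of distinct keys
        self._values = {}    # key -> list of values (duplicates allowed)

    def put(self, key, value):
        if key not in self._values:
            bisect.insort(self._keys, key)
            self._values[key] = []
        self._values[key].append(value)

    def cursor(self, lower, upper):
        """Traverse keys in deterministic (sorted) order within a range,
        loosely analogous to opening a cursor over a key range."""
        lo = bisect.bisect_left(self._keys, lower)
        hi = bisect.bisect_right(self._keys, upper)
        for key in self._keys[lo:hi]:
            for value in self._values[key]:
                yield key, value

store = TinyIndexedStore()
store.put("b", 2)
store.put("a", 1)
store.put("a", 10)   # duplicate values for one key are allowed
store.put("c", 3)

print(list(store.cursor("a", "b")))  # -> [('a', 1), ('a', 10), ('b', 2)]
```

Insertion order does not matter: the cursor always yields keys in sorted order, which is exactly the property key/value Web Storage cannot offer.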

IETF First Draft for Codec Requirements

Members of the IETF Internet Wideband Audio Codec (CODEC) Working Group have released an initial level -00 Internet Draft specification for "Codec Requirements." Additional discussion (development process, evaluation, requirements conformance, intellectual property issues) is provided in the draft "Guidelines for the Codec Development Within the IETF." The IETF CODEC Working Group was formed recently to "ensure the existence of a single high-quality audio codec that is optimized for use over the Internet and that can be widely implemented and easily distributed among application developers, service operators, and end users."
"According to reports from developers of Internet audio applications and operators of Internet audio services, there are no standardized, high-quality audio codecs that meet all of the following three conditions: (1) Are optimized for use in interactive Internet applications. (2) Are published by a recognized standards development organization (SDO) and therefore subject to clear change control. (3) Can be widely implemented and easily distributed among application developers, service operators, and end users. According to application developers and service operators, an audio codec that meets all three of these conditions would: enable protocol designers to more easily specify a mandatory-to-implement codec in their protocols and thus improve interoperability; enable developers to more easily build innovative, interactive applications for the Internet; enable service operators to more easily deploy affordable, high-quality audio services on the Internet; and enable end users of Internet applications and services to enjoy an improved user experience.
The "Codec Requirements" specification provides requirements for an audio codec designed specifically for use over the Internet. The requirements attempt to address the needs of the most common Internet interactive audio transmission applications and to ensure good quality when operating in conditions that are typical for the Internet. These requirements address quality, sampling rate, delay, bit-rate, and packet loss robustness. Other desirable codec properties are considered as well...
In-scope applications include: (1) Point-to-point calls: voice over IP (VoIP) calls between two "standard" (fixed or mobile) phones, implemented in hardware or software. (2) Conferencing: conferencing applications that support multi-party calls have additional requirements on top of the requirements for point-to-point calls; conferencing systems often have higher-fidelity audio equipment and greater network bandwidth available, especially when video transmission is involved. (3) Telepresence: most telepresence applications can be considered essentially very high-quality video-conferencing environments, so all of the conferencing requirements also apply to telepresence. (4) Teleoperation: teleoperation applications are similar to telepresence, with the exception that they involve remote physical interactions. (5) In-game voice chat: the requirements are similar to those of conferencing, with the main difference being that narrowband compatibility is not necessary. (6) Live distributed music performances / Internet music lessons, and other applications: live music requires extremely low end-to-end delay and is one of the most demanding applications for interactive audio transmission. See also the IETF Internet Wideband Audio Codec (CODEC) Working Group Charter.

Don't Look Down: The Path to Cloud Computing is Still Missing a Few Steps

This article narrates how government agencies are seeking to navigate issues of interoperability, data migration, security, and standards in the context of cloud computing. The government defines cloud computing as an on-demand model for network access, allowing users to tap into a shared pool of configurable computing resources, such as applications, networks, servers, storage and services, that can be rapidly provisioned and released with minimal management effort or service-provider interaction.
Momentum for cloud computing has been building during the past year, after the new [U.S.] administration trumpeted the approach as a way to derive greater efficiency and cost savings from information technology investments. But the journey to cloud computing infrastructures will take a few more years to unfold, federal CIOs and industry experts say. Issues of data portability among different cloud services, migration of existing data, security, and the definition of standards for all of those areas are the missing rungs on the ladder to the clouds.
The Federal Cloud Computing Security Working Group, an interagency initiative, is working to develop the Government-Wide Authorization Program (GAP), which will establish a standard set of security controls and a common certification and accreditation program that will validate cloud computing providers... Cloud vendors need to implement multiple agency policies, which can translate into duplicative risk management processes and lead to inconsistent application of federal security requirements.
At the user level, there are challenges associated with access control and identity management, according to Doug Bourgeois, director of the Interior Department's National Business Center. Organizations must extend their existing identity, access management, audit and monitoring strategies into the cloud. However, the problem is that existing enterprise systems might not easily integrate with the cloud... An agency cannot transfer data from a public cloud provider, such as Amazon or Google, put it in an infrastructure-as-a-service platform that a private cloud provider develops for the agency, and then exchange that data with another type of cloud provider; that type of data transfer is difficult because there are no overarching standards for operating in a hybrid environment...

Implementing User Agent Accessibility Guidelines (UAAG) 2.0

James Allan, Kelly Ford, Jeanne Spellman (eds), W3C Technical Report
Members of the W3C User Agent Accessibility Guidelines Working Group have published a First Public Working Draft of "Implementing UAAG 2.0: A Guide to Understanding and Implementing User Agent Accessibility Guidelines 2.0" and an updated version of the "User Agent Accessibility Guidelines (UAAG) 2.0" specification. Comments on the two documents should be sent to the W3C public list by 16-April-2010.
The "User Agent Accessibility Guidelines (UAAG) 2.0" specification is part of a series of accessibility guidelines published by the W3C Web Accessibility Initiative (WAI). It provides guidelines for designing user agents that lower barriers to Web accessibility for people with disabilities. User agents include browsers and other types of software that retrieve and render Web content. A user agent that conforms to these guidelines will promote accessibility through its own user interface and through other internal facilities, including its ability to communicate with other technologies (especially assistive technologies). Furthermore, all users, not just users with disabilities, should find conforming user agents to be more usable.
In addition to helping developers of browsers and media players, the document will also benefit developers of assistive technologies, because it explains what types of information and control an assistive technology may expect from a conforming user agent. Technologies not addressed directly by this document (e.g., technologies for braille rendering) will be essential to ensuring Web access for some users with disabilities.
The Working Draft of "Implementing UAAG 2.0" provides supporting information for the User Agent Accessibility Guidelines (UAAG) 2.0. The document explains the intent of the UAAG 2.0 success criteria and provides examples of implementation of the guidelines, best practice recommendations, and additional resources for each guideline. It includes a new section supporting the definition of a user agent. See also the updated UAAG 2.0 specification.

IETF Internet Draft: Requirements for End-to-End Encryption in XMPP

Members of the IETF Extensible Messaging and Presence Protocol (XMPP) Working Group have published an Internet Draft specifying "Requirements for End-to-End Encryption in the Extensible Messaging and Presence Protocol (XMPP)." The Extensible Messaging and Presence Protocol is an open technology for real-time communication, which powers a wide range of applications including instant messaging, presence, multi-party chat, voice and video calls, collaboration, lightweight middleware, content syndication, and generalized routing of XML data.
XMPP technologies are typically deployed using a client-server architecture. As a result, XMPP endpoints (often but not always controlled by human users) need to communicate through one or more servers. For example, the user 'juliet@capulet.lit' connects to the 'capulet.lit' server and the user 'romeo@montague.lit' connects to the 'montague.lit' server, but in order for Juliet to send a message to Romeo the message will be routed over her client-to-server connection with capulet.lit, over a server-to-server connection between 'capulet.lit' and 'montague.lit', and over Romeo's client-to-server connection with montague.lit. Although the XMPP-CORE specification requires support for Transport Layer Security to make it possible to encrypt all of these connections, when XMPP is deployed any of these connections might be unencrypted. Furthermore, even if the server-to-server connection is encrypted and both of the client-to-server connections are encrypted, the message would still be in the clear while processed by both the 'capulet.lit' and 'montague.lit' servers.
Thus, end-to-end ('e2e') encryption of traffic sent over XMPP is a desirable goal. Since 1999, the Jabber/XMPP developer community has experimented with several such technologies, including OpenPGP, S/MIME, and encrypted sessions. More recently, the community has explored the possibility of using Transport Layer Security (TLS) as the base technology for e2e encryption. In order to provide a foundation for deciding on a sustainable approach to e2e encryption, this document specifies a set of requirements that the ideal technology would meet.
This specification primarily addresses communications security ('commsec') between two parties, especially confidentiality, data integrity, and peer entity authentication. Communications security can be subject to a variety of attacks, which RFC 3552 divides into passive and active categories. In a passive attack, information is leaked (e.g., a passive attacker could read all of the messages that Juliet sends to Romeo). In an active attack, the attacker can add, modify, or delete messages between the parties, thus disrupting communications... Ideally, any technology for end-to-end encryption in XMPP could be extended to cover any of: one-to-one communication sessions between two 'online' entities; one-to-one messages that are not transferred in real time; one-to-many information broadcast; and many-to-many communication sessions among more than two entities. However, both one-to-many broadcast and many-to-many sessions are deemed out of scope for this document, and this document puts more weight on one-to-one communication sessions..." See also the section on Cryptographic Key Management.
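The core point, that the relaying servers must be unable to read the message (confidentiality) or undetectably alter it (integrity against active attacks), can be sketched as a toy in Python. The XOR keystream "cipher" and the pre-shared key are stand-ins for illustration only, not the TLS-based or OpenPGP/S/MIME approaches the draft discusses:

```python
import hashlib
import hmac

SHARED_KEY = b"juliet-and-romeo-shared-secret"   # hypothetical pre-agreed key

def keystream(key, length):
    # Toy keystream derived from SHA-256; a stand-in, not a vetted cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(key, plaintext):
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))
    tag = hmac.new(key, ct, hashlib.sha256).digest()   # data integrity
    return ct, tag

def open_(key, ct, tag):
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        raise ValueError("message was modified in transit (active attack)")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))

body = b"Wherefore art thou Romeo?"
ct, tag = seal(SHARED_KEY, body)

# The capulet.lit and montague.lit servers relay ct + tag but see no plaintext,
# and any modification of ct is caught by the HMAC check on arrival.
assert ct != body
print("delivered:", open_(SHARED_KEY, ct, tag).decode())
```

How the two endpoints agree on the shared key in the first place is exactly the key-management problem the requirements document has to address.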

Sunday, March 14, 2010

StoneGate SSL VPN Virtual Solution Supports OVF, SAML 2.0, and ADFS

"Stonesoft has introduced three new products designed to provide secure mobile and remote access: the new StoneGate SSL VPN Virtual solution, StoneGate SSL VPN 1.4, and StoneGate SSL-1060. The StoneGate SSL VPN Virtual solution is based on the Open Virtualization Format (OVF) standard and provides multiple features that meet the needs of these environments, such as strong authentication, a flexible application portal, and support for federated identity standards such as SAML 2.0 and ADFS. The StoneGate SSL VPN Virtual Appliance is compatible with both VMware's ESX/ESXi 3.5 and 4.0 (vSphere) versions.

The StoneGate SSL VPN Virtual solution complements the company's StoneGate Virtual Firewall and Virtual IPS solutions for virtual and cloud computing environments. The new solution allows rapid deployment and implementation of secure mobile access to cloud computing.

As cloud computing becomes more prevalent in corporate business and virtualized data centers, there is a stronger need for secure access to corporate applications in the cloud. The StoneGate SSL VPN Virtual solution combines granular access to corporate web and legacy applications with secure, authenticated profiling of users.

The StoneGate SSL VPN 1.4 offers organizations enhanced security through integrated mobile authentication methods, granular access control, and a holistic view of access rights within a single integrated access policy. Additionally, the appliance provides easy management and administration of access control for all network users. Administrators can easily select the parameters, or combination of parameters, that will grant or deny access to applications. This includes sophisticated assessment and trace-removal techniques to ensure that corporate security standards are enforced at all times for mobile and roaming users..." See also the OASIS SAML TC.

Introduction to Pyjamas: Exploit the Synergy of GWT and Python

Pyjamas is a cool tool, or framework, for developing Asynchronous JavaScript and XML (Ajax) applications in Python. It's a versatile tool that you can use to write comprehensive applications without writing any JavaScript code. This series examines the myriad aspects of Pyjamas, and this first article explores Pyjamas's background and basic elements.

Google's Web Toolkit (GWT) lets you develop a Rich Internet Application (RIA) with Ajax, entirely in Java code. You can use the rich Java toolset (IDEs, refactoring, code completion, debuggers, and so on) to develop applications that can be deployed on all major Web browsers. With GWT you can write applications that behave like desktop applications but run in the browser. Pyjamas, a GWT port, is a tool and framework for developing Ajax applications in Python.

WebKit, XUL, and their ilk bring modern flair to desktop applications. Pyjamas brings WebKit to Python developers. With WebKit, Pyjamas becomes a cross-browser and cross-platform set of GUI widgets. You can develop widgets that will run anywhere WebKit and XUL run. A Pyjamas API-based application can live anywhere GWT applications would live. Plus, Pyjamas lets you write desktop applications built on top of WebKit and XUL. This is preferable to building applications on top of Qt or GTK because WebKit supports CSS, and it is used in many other places for reliable rendering (iPhone, Safari, Android, and so on).

With Pyjamas you create containers, then add widgets to the containers. The widgets can be labels, text fields, buttons, and so forth. Widgets, like buttons, have event handlers so you can listen for click events from the button..."
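The container/widget/event-handler pattern the article describes can be sketched in plain Python. This is an illustrative analogue, not the actual Pyjamas API: real Pyjamas code imports widgets such as Button and Label from the pyjamas.ui package, and the class and method names below are stand-ins.

```python
# Minimal stand-ins for the container/widget/event-handler pattern.
# Names are illustrative; they are NOT the Pyjamas API.

class Widget:
    def __init__(self, text=""):
        self.text = text

class Label(Widget):
    pass

class Button(Widget):
    def __init__(self, text, click_handler=None):
        super().__init__(text)
        self._click_handler = click_handler

    def click(self):
        # Simulate a browser click event reaching the registered handler.
        if self._click_handler:
            self._click_handler(self)

class Panel:
    """A container that widgets are added to (compare Pyjamas panels)."""
    def __init__(self):
        self.children = []

    def add(self, widget):
        self.children.append(widget)

# Build a tiny UI: a label whose text changes when the button is clicked.
panel = Panel()
label = Label("waiting")
button = Button("Go", click_handler=lambda sender: setattr(label, "text", "clicked"))
panel.add(label)
panel.add(button)

button.click()
print(label.text)  # -> clicked
```

The shape is the same in real Pyjamas code: create a container, add widgets, and attach listeners that react to browser events.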

Dissecting the Consortium: A Uniquely Flexible Platform for Collaboration

Andrew Updegrove, Standards Today Bulletin

"The opportunities and imperatives for collaborative action of all kinds among both for-profit and non-profit entities are growing as the world becomes more interconnected and problem solving becomes less susceptible to unilateral action. Those activities include research and development, information acquisition and sharing, group purchasing, open source software and content creation, applying for government grant funding, and much more.
At the same time, the rapid spread of Internet and Web accessibility allows collaborative activities to be undertaken more easily, and among more widely distributed participants, than has ever been possible before. But while the technology enabling collaboration has become ubiquitous, hard-won knowledge regarding best practices, successful governance structures, and appropriate legal frameworks for forming and managing successful collaborative activities has yet to be widely shared. As a result, those wishing to launch new collaborative projects may have difficulty finding reliable guidance in order to create structures appropriate to support their activities.
In this article, I provide a list of the attributes that define consortia and the functions that are common to them, an overview of how their activities are typically staffed and supported, a comparative taxonomy of the existing legal/governance structures that have been created to address them, and an overview of the legal concerns which consortium founders need to address...
Multiple forces in the world today are converging to increase the ease and raise the value of collaboration in both the public and private sectors. Indeed, it is becoming increasingly common in business literature to find the opinion expressed that companies that fail to collaborate with their peers will be at a severe disadvantage to their more-willing competitors. In light of such opportunities, it is important for the founders of new collaborative projects, and their legal counsel, to be familiar with the types of frameworks available to serve as platforms for their endeavors, and to choose wisely before launching their initiatives. Happily, the consortium model, in all of its variations, provides a uniquely flexible and appropriate foundation upon which the collaborations of the future can be based..."

Expressing SNMP SMI Datatypes in XML Schema Definition Language

Mark Ellison and Bob Natale (eds), IETF Internet Draft

Members of the IETF Operations and Management Area Working Group have published a revised Internet Draft for "Expressing SNMP SMI Datatypes in XML Schema Definition Language." The memo defines the IETF standard expression of Structure of Management Information (SMI) base datatypes in Extensible Markup Language (XML) Schema Definition (XSD) language. The primary objective of this memo is to enable the production of XML documents that are as faithful to the SMI as possible, using XSD as the validation mechanism.

Background: "Numerous use cases exist for expressing the management information described by SMI Management Information Base (MIB) modules in XML. Potential use cases reside both outside and within the traditional IETF network management community. For example, developers of some XML-based management applications may want to incorporate the rich set of data models provided by MIB modules. Developers of other XML-based management applications may want to access MIB module instrumentation via gateways to SNMP agents. Such applications benefit from the IETF standard mapping of SMI datatypes to XML datatypes via XSD.

MIB modules use SMIv2 (RFC 2578) to describe data models. For legacy MIB modules, SMIv1 (RFC 1155) was used. MIB data conveyed in variable bindings ('varbinds') within protocol data units (PDUs) of SNMP messages use the primitive, base datatypes defined by the SMI. The SMI allows for the creation of derivative datatypes, 'textual conventions' ('TCs'). A TC has a unique name, has a syntax that either refines or is a base SMI datatype, and has relatively precise application-level semantics. TCs facilitate correct application-level handling of MIB data, improve readability of MIB modules by humans, and support appropriate renderings of MIB data.

Values in varbinds corresponding to MIB objects defined with TC syntax are always encoded as the base SMI datatype underlying the TC syntax. Thus, the XSD mappings defined in this memo provide support for values of MIB objects defined with TC syntax as well as for values of MIB objects defined with base SMI syntax. Various independent schemes have been devised for expressing SMI datatypes in XSD. These schemes exhibit a degree of commonality, especially concerning numeric SMI datatypes, but these schemes also exhibit sufficient differences, especially concerning the non-numeric SMI datatypes, precluding uniformity of expression and general interoperability..." See also the IETF Operations and Management Area Working Group status pages.
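To give a feel for the kind of mapping the draft standardizes, a plausible correspondence between SMI base datatypes and XSD built-in types can be written as a lookup table. The authoritative type names, facets, and lexical patterns are defined by the draft itself; the pairs below are a simplified assumption for illustration only.

```python
# Illustrative mapping from SMI base datatypes to XSD built-in types.
# The draft defines the normative mappings (including pattern facets
# for the string-valued types); this table is a simplified sketch.
SMI_TO_XSD = {
    "INTEGER":           "xs:int",           # 32-bit signed
    "Integer32":         "xs:int",
    "Unsigned32":        "xs:unsignedInt",
    "Gauge32":           "xs:unsignedInt",
    "Counter32":         "xs:unsignedInt",
    "TimeTicks":         "xs:unsignedInt",
    "Counter64":         "xs:unsignedLong",
    "OCTET STRING":      "xs:string",        # constrained by a pattern facet
    "IpAddress":         "xs:string",        # dotted-quad pattern
    "OBJECT IDENTIFIER": "xs:string",        # dotted OID pattern
}

def xsd_type_for(smi_type: str) -> str:
    """Return the XSD type used to validate values of an SMI base type."""
    return SMI_TO_XSD[smi_type]

print(xsd_type_for("Counter64"))  # -> xs:unsignedLong
```

Because TC-typed values are encoded on the wire as their underlying base type, a table like this covers TC-defined objects as well.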

Proposed Recommendation Call for Review: XProc - An XML Pipeline Language

Norman Walsh, Alex Milowski, Henry S. Thompson (eds), W3C PR

The W3C XML Processing Model Working Group has published a Proposed Recommendation for "XProc - An XML Pipeline Language", together with an "Implementation Report for XProc: An XML Pipeline Language." Given that the changes to this draft do not affect the validity of that earlier implementation feedback, except in specific areas also now covered by more recent implementation feedback, the Working Group is now publishing this version as a Proposed Recommendation. The review period ends on 15-April-2010; members of the public are invited to send comments on this Proposed Recommendation to the 'public-xml-processing-model-comments' mailing list.

An XML Pipeline specifies a sequence of operations to be performed on a collection of XML input documents. Pipelines take zero or more XML documents as their input and produce zero or more XML documents as their output.

A pipeline consists of steps. Like pipelines, steps take zero or more XML documents as their inputs and produce zero or more XML documents as their outputs. The inputs of a step come from the web, from the pipeline document, from the inputs to the pipeline itself, or from the outputs of other steps in the pipeline. The outputs from a step are consumed by other steps, are outputs of the pipeline as a whole, or are discarded. There are three kinds of steps: atomic steps, compound steps, and multi-container steps. Atomic steps carry out single operations and have no substructure as far as the pipeline is concerned. Compound steps and multi-container steps control the execution of other steps, which they include in the form of one or more subpipelines.

The result of evaluating a pipeline (or subpipeline) is the result of evaluating the steps that it contains, in an order consistent with the connections between them. A pipeline must behave as if it evaluated each step each time it is encountered. Unless otherwise indicated, implementations must not assume that steps are functional (that is, that their outputs depend only on their inputs, options, and parameters) or side-effect free. The pattern of connections between steps will not always completely determine their order of evaluation. The evaluation order of steps not connected to one another is implementation-dependent... A typical step has zero or more inputs, from which it receives XML documents to process, zero or more outputs, to which it sends XML document results, and can have options and/or parameters. An atomic step is a step that performs a unit of XML processing, such as XInclude or transformation, and has no internal subpipeline. Atomic steps carry out fundamental XML operations and can perform arbitrary amounts of computation, but they are indivisible. An XSLT step, for example, performs XSLT processing; a Validate with XML Schema step validates one input with respect to some set of XML Schemas, etc..." See also the XProc Implementation Report.
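To make the step/pipeline vocabulary concrete, the following sketch generates a minimal two-step XProc pipeline document (an XInclude step whose output feeds an XSLT step) using Python's standard ElementTree. The element names come from the XProc namespace and standard step library; the stylesheet reference 'style.xsl' is a hypothetical placeholder.

```python
import xml.etree.ElementTree as ET

XPROC_NS = "http://www.w3.org/ns/xproc"
ET.register_namespace("p", XPROC_NS)

def q(name):
    """Qualify an element name in the XProc namespace."""
    return f"{{{XPROC_NS}}}{name}"

# A pipeline with two atomic steps: p:xinclude implicitly feeds p:xslt.
pipeline = ET.Element(q("pipeline"), {"version": "1.0"})
ET.SubElement(pipeline, q("xinclude"))
xslt = ET.SubElement(pipeline, q("xslt"))
stylesheet_input = ET.SubElement(xslt, q("input"), {"port": "stylesheet"})
# 'style.xsl' is a placeholder document reference.
ET.SubElement(stylesheet_input, q("document"), {"href": "style.xsl"})

doc = ET.tostring(pipeline, encoding="unicode")
print(doc)
```

Both steps here are atomic: each performs one unit of XML processing and has no subpipeline, exactly as the excerpt describes.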

OASIS Blue Member Section: Open Standards for Smart Energy Grids

OASIS has announced the formation of a new Member Section, OASIS Blue, which will bring together a variety of open standards projects related to energy, intelligent buildings, and natural resources. OASIS Blue will leverage the innovation of existing electronic commerce standards and the power of the Internet to achieve meaningful sustainability. An international effort, OASIS Blue incorporates work that has been identified as a central deliverable for the U.S. government's strategic Smart Grid initiative. OASIS Blue welcomes suggestions for forming new Committees related to its mission.
The collaboration includes IBM, Constellation NewEnergy, CPower, EnerNOC, Grid Net, HP, NeuStar, TIBCO, U.S. Department of Defense, U.S. National Institute of Standards and Technology (NIST), and others.
Several Technical Committees will coordinate efforts under OASIS Blue. The Energy Interoperation Technical Committee defines standards for the collaborative and transactive use of energy within demand response and distributed energy resources. The Energy Market Information Exchange (eMIX) Technical Committee works on exchanging pricing information and product definitions in energy markets. The Open Building Information Exchange (oBIX) Technical Committee enables mechanical and electrical control systems in buildings to communicate with enterprise applications. Members of the oBIX TC plan to use the WS-Calendar specification to coordinate control system performance expectations with enterprise and smart grid activities.
David Chassin of Pacific Northwest National Laboratory, chair of the OASIS Blue Steering Committee: "OASIS Blue provides a safe, neutral environment where stakeholders can cooperate to define clear taxonomies and information-sharing protocols that will be recognized by the international standards community." Other OASIS Blue Steering Committee members include Steven Bushby of NIST, Bob Dolin of Echelon, Rik Drummond of the Drummond Group, Girish Ghatikar of Lawrence Berkeley National Laboratory, Francois Jammes of Schneider Electric, Arnaud Martens of Belgian SPF Finances, Dana K. "Deke" Smith of buildingSMART alliance, and Jane L. Snowdon, Ph.D., of IBM.

IETF Internet Draft: Security Requirements for HTTP

Jeff Hodges and Barry Leiba (eds), IETF Internet Draft

An updated version of the IETF Informational Internet Draft has been published documenting "Security Requirements for HTTP." Recent Internet Engineering Steering Group (IESG) practice dictates that IETF protocols must specify mandatory-to-implement (MTI) security mechanisms, so that all conformant implementations share a common baseline. This document examines all widely deployed HTTP security technologies, and analyzes the trade-offs of each. The document examines the effects of applying security constraints to Web applications, documents the properties that result from each method, and will make Best Current Practice recommendations for HTTP security in a later document version.
Some existing HTTP security mechanisms include: Forms And Cookies; HTTP Access Authentication (Basic Authentication, Digest Authentication, Authentication Using Certificates in TLS, Other Access Authentication Schemes); Centrally-Issued Tickets; and Web Services security mechanisms. In addition to using TLS for client and/or server authentication, it is also very commonly used to protect the confidentiality and integrity of the HTTP session. For instance, both HTTP Basic authentication and Cookies are often protected against snooping by TLS. It should be noted that, in that case, TLS does not protect against a breach of the credential store at the server or against a keylogger or phishing interface at the client. TLS does not change the fact that Basic Authentication passwords are reusable and does not address that weakness.
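The point about Basic Authentication passwords being reusable is easy to see from the mechanics: the credential is a static, base64-encoded username:password pair sent on every request, so anyone who captures one header (absent TLS) can replay it indefinitely. A minimal sketch with illustrative credentials:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the Authorization header value used by HTTP Basic auth.

    Base64 is an encoding, not encryption: the header is trivially
    reversible, which is why Basic auth depends on TLS for
    confidentiality on the wire.
    """
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

header = basic_auth_header("alice", "s3cret")
print(header)  # -> Basic YWxpY2U6czNjcmV0

# Anyone observing the header can recover the password directly:
recovered = base64.b64decode(header.split(" ", 1)[1]).decode("utf-8")
print(recovered)  # -> alice:s3cret
```

And, as the draft notes, even with TLS the same static credential remains exposed to a compromised server credential store or a keylogger at the client.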
It is possible that HTTP will be revised in the future. "HTTP/1.1" (RFC 2616) and "Use and Interpretation of HTTP Version Numbers" (RFC 2145) define conformance requirements in relation to version numbers. In HTTP 1.1, all authentication mechanisms are optional, and no single transport substrate is specified. Any HTTP revision that adds a mandatory security mechanism or transport substrate will have to increment the HTTP version number appropriately. All widely used schemes are non-standard and/or proprietary..."

Elastic Provisioning in the Cloud: Terracotta and Eucalyptus Integration

"Terracotta recently announced a partnership with open source private cloud platform vendor Eucalyptus that allows the companies to provision private clouds on the Amazon AWS-compatible Eucalyptus cloud platform and take advantage of the elasticity and flexibility of the cloud.
Eucalyptus is compatible with the Amazon AWS public cloud infrastructure, and its design gives users the option of moving applications from on-premise Eucalyptus clouds to public clouds, and vice versa. It also supports 'hybrid' clouds, allowing a composite of private (generally used to store private data) and public (provided by cloud service providers to offer customers the ability to deploy and consume services) cloud resources together to get the benefits of both deployment models. By addressing the data layer and provisioning elastic cloud resources within internal infrastructure, the Eucalyptus and Terracotta integration gives organizations a way to build private clouds using commodity hardware and virtualization technology."
Excerpts from comments of Ari Zilka (Terracotta) and Rich Wolski (Eucalyptus): "We have both seen the need for the combined feature set we offer in a number of customer engagements, so working together made a lot of sense. Eucalyptus provides the provisioning and management framework for building and operating private clouds, and Terracotta ensures application data can elastically scale to meet the demands of this dynamically configured compute tier. The products are very complementary...
Developers using Eucalyptus as a cloud platform can immediately use the Terracotta scaling and caching frameworks to quickly build scalable web sites and Java applications, and deploy those applications to either Eucalyptus or to Amazon AWS. Developers already using Terracotta in Amazon's cloud can bring those applications and sites into a Eucalyptus-managed, on-premise cloud within their own data center..."

Anatomy of an Open Source Cloud: Infrastructure as a Service

Cloud computing is no longer a technology on the cusp of breaking out but a valuable and important technology that is fundamentally changing the way we use and develop applications. Volumes could be written about the leadership role that open source is playing in the cloud and virtualization domain, but this article provides a short introduction to some of the popular and visible solutions available today. Whether you're looking to build a cloud based on your own requirements from individual pieces or simply want a cohesive solution that works out of the box, open source has you covered.
This article begins with an exploration of the core abstractions of cloud architectures (from Infrastructure as a Service, IaaS), then moves beyond the building blocks to the more highly integrated solutions. Although not a requirement, virtualization provides unique benefits for building dynamically scalable architectures. In addition to scalability, virtualization introduces the ability to migrate virtual machines (VMs) between physical servers for the purposes of load balancing. The virtualization component is provided by a layer of software called a hypervisor (sometimes called a virtual machine monitor, or VMM). This layer provides the ability to execute multiple operating systems (and their applications) simultaneously on a single physical machine. On the hypervisor is an object called a virtual machine that encapsulates the operating system, applications, and configuration. Optionally, device emulation can be provided in the hypervisor or as a VM. Finally, given the new dynamic nature of virtualization and the new capabilities it provides, new management schemes are needed. This management is best done in layers, considering local management at the server, as well as higher-level infrastructure management, providing the overall orchestration of the virtual environment.
As VMs are an aggregation of operating system, root file system, and configuration, the space is ripe for tool development. But to realize the full potential of VMs and tools, there must be a portable way to assemble them. The current approach, called the Open Virtualization Format (OVF), is a VM construction that is flexible, efficient, and portable. OVF wraps a virtual disk image in an XML wrapper that defines the configuration of the VM, including networking configuration, processor and memory requirements, and a variety of extensible metadata to further define the image and its platform needs. The key capability provided by OVF is the portability to distribute VMs in a hypervisor-agnostic manner..."
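A simplified view of the OVF idea of "a disk image in an XML wrapper" can be sketched by generating an envelope with ElementTree. The element names below only follow the general shape of OVF's Envelope/References/VirtualHardwareSection structure; the real format requires the DMTF OVF namespaces and many attributes omitted here, and the disk file name is a placeholder.

```python
import xml.etree.ElementTree as ET

# Drastically simplified OVF-style envelope: a disk image reference
# plus hardware requirements. Real OVF adds namespaces, checksums,
# and a full resource-allocation vocabulary.
envelope = ET.Element("Envelope")

references = ET.SubElement(envelope, "References")
# 'appliance-disk1.vmdk' is a placeholder image name.
ET.SubElement(references, "File", {"id": "disk1", "href": "appliance-disk1.vmdk"})

hardware = ET.SubElement(envelope, "VirtualHardwareSection")
ET.SubElement(hardware, "Item", {"resource": "cpu", "quantity": "2"})
ET.SubElement(hardware, "Item", {"resource": "memory", "quantity": "4096"})  # MB

doc = ET.tostring(envelope, encoding="unicode")
print(doc)
```

The point of the wrapper is exactly what the excerpt says: the same package can be consumed by any hypervisor whose tooling understands the descriptor, independent of the disk format inside.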

W3C Launches Decisions and Decision-Making Incubator Group

"W3C is pleased to announce the creation of the Decisions and Decision-Making Incubator Group. The mission of this XG is to determine the requirements, use cases, and a representation of decisions and decision-making in a collaborative and networked environment suitable for leading to a potential standard for decision exchange, shared situational awareness, and measurement of the speed, effectiveness, and human factors of decision-making. Incubator Activity work is not on the W3C standards track but in many cases serves as a starting point for a future Working Group. The following W3C Members have sponsored the charter for this group: DISA, MITRE, and CNR. Jeff Waters and Don McGarry are the initial Decisions and Decision-Making Incubator Group co-Chairs.
Background: "Everyone makes important decisions in the daily accomplishment of their duties. The aggregate of these decisions constitutes the current state of their organization, and charts the course for our future direction and progress. In a sense, our decisions represent individuals and the organizations they represent. The effective representation, management, evaluation, and sharing of these decisions determines the success of the enterprise. Especially in a distributed, self-organizing, networked environment where digital media are the main interaction between members, distribution and tracking of decisions is particularly important for understanding what others are doing. Our decisions serve as information-work products, both as inputs and outputs. We use others' decisions as references, and our decisions become references to the decision process of others. The significant time and effort we spend converting our decisions into work products such as briefs, papers, proposals, and communication of our decisions in meetings, teleconferences, conversations, and emails, could be recaptured if we had a standard concise format for representing and sharing our decisions.
For these reasons, the members of the Decisions and Decision-Making Incubator are exploring and determining the requirements, use cases, and a potential standard format for representing our decisions efficiently and effectively in a collaborative networked environment for the purpose of information exchange for situational awareness. The Emergency Data Exchange Language Common Alerting Protocol (EDXL-CAP) family of standards is an example of the type and style of information exchange formats which are simple, useful, and understandable. What EDXL-CAP did for alerts, a common decision exchange protocol should do for decisions. However, to reach its full potential, the proposed decision format must be extended by Semantic Web tools and standards to provide semantic interoperability and to provide a basis for reasoning that can ease development of advanced applications. Simplicity and understandability of decisions is particularly important in distributed, dynamic settings such as emergency management.
The group will maintain a wiki site containing relevant information. The deliverables will be a final report, a potential standard ontology, examples, and potentially prototype tools using the ontology. The vision of the final report outline includes: introduction, background and need, scope, use cases, requirements, issues and challenges, ontological patterns and solutions, sample decision ontology, representation formats, examples, candidate tools for instrumentation, recommendations, and conclusion. In case the group decides that a particular technology is ripe for further standardization at the W3C, the group will consider preparing a W3C member submission and/or propose a W3C group charter to be considered by the W3C.

Friday, March 12, 2010

OGC Announces Earth Observation Profile for Web-based Catalogue Services

Open Geospatial Consortium Announcement

The Open Geospatial Consortium has announced adoption and availability
of the "OGC Catalogue Services Standard Extension Package for ebRIM
Application Profile: Earth Observation Products", and also the related
"Geography Markup Language (GML) Application Schema for EO Products."
Together, these standards, when implemented, will enable more efficient
data publishing and discovery for a wide range of stakeholders who
provide and use data generated by satellite-borne and aerial radar,
optical and atmospheric sensors.

The OASIS standard ebRIM (Electronic business Registry Information Model)
is the preferred cataloguing metamodel foundation for application
profiles of the OpenGIS Catalogue Service Web (CS-W) Standard. The CS-W
ebRIM EO standard describes a set of interfaces, bindings and encodings
to be implemented in catalog servers so that data providers can publish
descriptive information (metadata) about Earth Observation data. Developers
can also implement this standard as part of Web clients that will enable
data users and their applications to very efficiently search and exploit
these collections of Earth Observation data.

"OGC Catalogue Services Standard 2.0 Extension Package for ebRIM
Application Profile: Earth Observation Products" (OGC 06-131r6) is an
OGC Implementation Standard of subtype Application Profile. This
application profile standard "describes the interfaces, bindings and
encodings required to discover, search and present metadata from
catalogues of Earth Observation products. The profile presents a
minimum specification for catalogue interoperability within the EO
domain, with extensions for specific classes of data. It enables
CSW-ebRIM catalogues to handle a variety of metadata pertaining to
earth observation, like EO Products... EO data product collections
are usually structured to describe data products derived from a single
sensor onboard a satellite or series of satellites. Products from
different classes of sensors usually require specific product metadata.
The following classes of products have been identified so far: radar,
optical, atmospheric."

"OpenGIS Geography Markup Language (GML) Application Schema for Earth
Observation Products" (OGC 06-080r4) is an OGC Implementation Standard
of subtype GML Application Schema. It "describes the encodings required
to describe Earth Observation (EO) products from general to mission
specific characteristics. The approach consists in modelling EO data
product through a GML application schema. ISO definitions are
specified for attributes where available, although the full ISO
schema is not used for the structural definitions, as that would
lead to a less efficient overall structure. The general mechanism
is to create
a schema with a dedicated namespace for each level of specificity
from a general description which is common to each EO Product to a
restricted description for specific mission EO Product. Each level
of specificity is an extension of the previous one..."

IETF Publishes xCal: The XML Format for iCalendar

Cyrus Daboo, Mike Douglass, and Steven Lees (eds), IETF Internet Draft

An IETF Internet Draft previously published under the title "iCalendar
XML Representation" has been revised and issued under the new title
"xCal: The XML Format for iCalendar." An added Section 5 (Handling
Link Elements in XML and iCal) now recommends a preferred format for
links in xCal documents, specifies an iCalendar extension for including
links in iCalendar documents, and describes how to convert between the
two formats.

"The iCalendar data format defined in IETF RFC 5545 is a widely deployed
interchange format for calendaring and scheduling data. While many
applications and services consume and generate calendar data, iCalendar
is a specialized format that requires its own parser/generator. In
contrast, XML-based formats are widely used for interoperability between
applications, and the many tools that generate, parse, and manipulate
XML make it easier to work with than iCalendar.

The purpose of this specification is to define 'xCal', an XML format
that allows iCalendar data to be converted to XML, and then back to
iCalendar, without losing any semantic meaning in the data. Anyone
creating XML calendar data according to this specification will know
that their data can be converted to a valid iCalendar representation
as well. Two key design considerations are: (1) Round-tripping
(converting an iCalendar instance to XML and back) will give the same
result as the starting point. (2) Preserving the semantics of the
iCalendar data. While a simple consumer can easily browse the calendar
data in XML, a full understanding of iCalendar is still required in
order to modify and/or fully comprehend the calendar data.
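The round-tripping requirement can be illustrated with a toy converter for a single, parameter-free property. This is a deliberately simplified sketch, not the full xCal mapping: real xCal nests typed property values inside component elements under an icalendar root element, but the invariant being demonstrated is the same one the draft states, that converting to XML and back yields the starting data.

```python
import xml.etree.ElementTree as ET

def property_to_xml(ical_line: str) -> ET.Element:
    """Convert one parameter-free iCalendar content line, e.g.
    'SUMMARY:Team meeting', into a simplified xCal-style element
    with a typed <text> value child."""
    name, value = ical_line.split(":", 1)
    elem = ET.Element(name.lower())
    text = ET.SubElement(elem, "text")
    text.text = value
    return elem

def xml_to_property(elem: ET.Element) -> str:
    """Convert the simplified element back to an iCalendar content line."""
    return f"{elem.tag.upper()}:{elem.find('text').text}"

line = "SUMMARY:Team meeting"
round_tripped = xml_to_property(property_to_xml(line))
print(round_tripped == line)  # -> True
```

A conforming converter must preserve this property across the whole iCalendar vocabulary, which is what makes xCal safe as an interchange form.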

Handling Link Elements in XML and iCal: Both Atom (RFC 4287) and HTML
use a link element to reference external information which is related
in some way to the referencing document. iCalendar (RFC 5545) does not
have such a mechanism. There are several common use cases where it would
be useful for a calendar item to link to external structured data. For
instance, it would be useful for an event item to denote the location
of the event by referencing a vCard. Similarly, there may be a primary
contact person for the event, and that person's vCard should be linked
from the event as well. It is recommended therefore that calendar data
in the xCal format use the Atom link element, as specified in RFC 4287
section 4.2.7, for linking to external related resources. The Relation
Name 'location' designates a location for the referencing item. Typically
the location will be in the form of a vCard, but it could be some other
kind of document containing location information... A LINK extension
property for iCalendar is used to reference external documents related
to this calendar item. The property can appear on any iCalendar component,
and the value of this property is a URI which references an external
resource related to the component. Since the LINK parameter is specified
in terms of the link element defined by RFC 4287, converting between
the two is straightforward. When converting from iCalendar to xCal,
simply take any parameters present and assign their values to the
corresponding attribute on the link element. Any unknown extensions
either in the iCalendar or xCal format MAY be ignored when converting
to the other format..."
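The parameter-to-attribute conversion described above is mechanical. A sketch for a hypothetical LINK content line follows; the REL and TYPE parameter names and the vCard URI are illustrative examples, not values taken from the draft.

```python
def link_property_to_attrs(ical_line: str) -> dict:
    """Split an iCalendar-style property of the form
    NAME;PARAM1=v1;PARAM2=v2:value into link-element attributes,
    mapping the property value itself to href.

    Simplified: real iCalendar parameter values containing ':' or ';'
    must be quoted, which this sketch does not handle.
    """
    head, value = ical_line.split(":", 1)
    parts = head.split(";")
    attrs = {"href": value}
    for param in parts[1:]:  # parts[0] is the property name itself
        key, val = param.split("=", 1)
        attrs[key.lower()] = val
    return attrs

# Hypothetical LINK property pointing at a vCard for the event location.
attrs = link_property_to_attrs("LINK;REL=location;TYPE=text/vcard:http://example.com/mtg.vcf")
print(attrs)
```

Going the other way is the mirror image: each attribute on the link element becomes a parameter on the LINK property, with href becoming the property value.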

NIST Publishes Open Vulnerability and Assessment Language (OVAL)

John Banghart, Stephen Quinn, David Waltermire (eds), NIST Report

An announcement from Pat O'Reilly of NIST's Computer Security Division
reports on the publication of a Draft NIST Interagency Report (IR)
7669: "Open Vulnerability and Assessment Language (OVAL) Validation
Program Test Requirements." The report defines the requirements and
associated test procedures necessary for products to achieve one or
more Open Vulnerability and Assessment Language (OVAL) Validations.
Validation is awarded based on testing a defined set of OVAL
capabilities by independent laboratories that have been accredited
for OVAL testing by the NIST National Voluntary Laboratory
Accreditation Program (NVLAP).

Open Vulnerability and Assessment Language (OVAL) is an information
security community standard to promote open and publicly available
security content, and to standardize the transfer of this information
across security tools and services. The OVAL Language is an XML
specification for exchanging technical details on how to check systems
for security-related software flaws, configuration issues, and patches.

The OVAL Language standardizes the three main steps of the assessment
process: representing configuration information of systems for testing;
analyzing the system for the presence of the specified machine state
(vulnerability, configuration, patch state, etc.); and reporting the
results of the assessment. In this way, OVAL enables open and publicly
available security content and standardizes the transfer of this content
across the entire spectrum of information security tools and services.
OVAL is maintained by the MITRE Corporation...

The NIST OVAL Validation Program is designed to test the ability of
products to use the features and functionality defined in the OVAL
Language. An information technology (IT) product vendor can obtain
one or more OVAL Validations for a product. These validations are
based on the test requirements defined in this document, which cover
four distinct but related validations based on product functionality..."