Thursday, March 27, 2008

Jacquard: a Methodology for Web Publishing

This article introduces Jacquard, a software development methodology
specialized for Web projects, and especially for Web development among
diverse teams. Jacquard seeks to align the work and goals of business
interest personnel, Web designers, programmers, project managers,
database analysts, and more. The author discusses the core principles
of Jacquard, and provides an example of its use in communication between
a user experience team and a programmer team. He uses the W3C's Simple
Knowledge Organization System (SKOS), which is a very useful technology
for the expression of ideas in a way natural to humans, but in a very
Web-ready format (RDF) -- together with the Turtle syntax for RDF, which
is easier to read than RDF/XML. The Jacquard methodology requires formal
expression of the core concepts in a way that can be a shared reference
across the various teams...

Jacquard (pronounced like "jack-card" with
more emphasis on the second syllable) is a software development methodology
specialized for Web projects, and especially suited for such development
among diverse teams. The Web is in many ways different from any
information platform before it, and this suggests a fresh approach to
development and teamwork. In general it makes sense to look outward to
the Web, and not inward and backward to traditional methodologies, to
find what works. Lightweight, agile process mirrors the basic nature of
the Web, and so does focusing on the data, and how data is organized for
sharing. The specific application or database implementation is not as
important, nor are the tools you choose to use. This mirrors the Web,
which builds on sharing data, and does not require uniformity of
implementations. As such, implementation independence is one of the core
principles of Jacquard. Another principle is support for decentralized
communication. The Web works well across geographical boundaries, and
with the increase of off-shore outsourcing and flexible work arrangements,
it's useful to learn lessons on decentralization and rich communication.
The Web is such a rich information space that some philosophically
consider it a realm of its own which parallels, and sometimes intersects,
our own real world -- the idea of "cyberspace." Paying attention to where
idioms on the Web draw from real-world concepts and phenomena is important
to usability, and so Jacquard's principle of conceptual alignment
encourages you to take care to express the concepts behind your Web
project, and to make that clear expression the foundation for
communication on the project.
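
To make the conceptual-alignment idea concrete, here is a minimal
sketch, assuming a recent Apache Jena release (which ships a SKOS
vocabulary class); the namespace and concept names are hypothetical.
It models two project concepts in SKOS and emits them in the Turtle
syntax the article recommends:

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Resource;
    import org.apache.jena.vocabulary.RDF;
    import org.apache.jena.vocabulary.SKOS;

    public class ProjectConcepts {
        public static void main(String[] args) {
            Model m = ModelFactory.createDefaultModel();
            m.setNsPrefix("skos", SKOS.getURI());
            String base = "http://example.org/project/concepts#"; // hypothetical namespace
            Resource account = m.createResource(base + "CustomerAccount")
                    .addProperty(RDF.type, SKOS.Concept)
                    .addProperty(SKOS.prefLabel, "customer account", "en")
                    .addProperty(SKOS.definition,
                            "The record of a customer's relationship with the business.", "en");
            m.createResource(base + "PremiumAccount")
                    .addProperty(RDF.type, SKOS.Concept)
                    .addProperty(SKOS.prefLabel, "premium account", "en")
                    .addProperty(SKOS.broader, account); // points to the broader concept
            m.write(System.out, "TURTLE"); // emit the shared vocabulary in Turtle
        }
    }

The Turtle output is the kind of compact, human-readable artifact that
can serve as the shared reference Jacquard calls for.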

Apache POI: Java API To Access Microsoft Format Files

Microsoft has announced a new partnership with Sourcesense, a leading
European open source systems integration consultancy. The two companies
will collaborate on the strategy, development and deployment of open
source solutions for the Microsoft Office product suite. One of the
initial goals of the partnership is contributing to the development of
a new version of Apache POI, a top-level project of the Apache Software
Foundation (ASF). Widely used in financial services and critical
enterprise applications across related sectors, Apache POI is a leading
open source file format reader and writer to create, edit and read
Microsoft Office formats used in Excel, Word, PowerPoint and Visio.
Apache POI is a Java application programming interface (API) used to
access and manage the Microsoft Office binary formats; by alleviating
the need for complex programming and reverse engineering, it can readily
be applied to today's billions of binary-format documents. Because
Apache POI libraries are used in numerous open source projects, developing
future libraries to support the Ecma Office Open XML File Formats (the
default file format in the 2007 Microsoft Word, Excel and PowerPoint
products) will play an important role in new interoperability scenarios
where XML-based standard formats will be key for Office documents. Apache
POI support for Open XML is currently in development within the Apache
Software Foundation; its first release is anticipated during the second
quarter of 2008. Code contributions are made by ASF members and committers
(developers authorized to 'commit' or 'write' code, patches or
documentation to the ASF repository), and overseen by the Apache POI
Project Management Committee (PMC). Details are published in the
Microsoft press release "Microsoft and Sourcesense Partner to Contribute
to Open Source, Apache POI to Support Ecma Office Open XML File Formats."
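
As a small illustration of the kind of programming POI enables, here is
a minimal sketch that writes an Excel binary (.xls) workbook using the
HSSF classes as they appear in recent POI releases; the file name and
cell contents are invented:

    import java.io.FileOutputStream;
    import org.apache.poi.hssf.usermodel.HSSFRow;
    import org.apache.poi.hssf.usermodel.HSSFSheet;
    import org.apache.poi.hssf.usermodel.HSSFWorkbook;

    public class PoiDemo {
        public static void main(String[] args) throws Exception {
            HSSFWorkbook wb = new HSSFWorkbook();      // a new, empty .xls workbook
            HSSFSheet sheet = wb.createSheet("Quarterly");
            HSSFRow row = sheet.createRow(0);
            row.createCell(0).setCellValue("Revenue");
            row.createCell(1).setCellValue(1250000.0);
            FileOutputStream out = new FileOutputStream("report.xls");
            wb.write(out);                             // serialize to the OLE2 binary format
            out.close();
        }
    }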

OASIS Open Standards Symposium 2008

OASIS announced that "Composability within SOA" will be the focus of
Open Standards 2008, the fifth annual symposium hosted by OASIS. This
event, which will be held in Santa Clara, California, 28-April-2008
through 1-May-2008, will examine the critical issues faced when
architecting service-oriented applications and the benefits being
reaped by real-world implementations that take advantage of Web services
transactions. Presentations on the Business Process Execution Language
(BPEL), Service Component Architecture (SCA), Service Data Objects (SDO),
WS-Transaction, and related standards will be featured. In an Open
Standards 2008 keynote address, Peter Carbone, Vice President, SOA,
Office of the CTO at Nortel, will share insights on the new realities
presented by communications-enabled applications and the opportunities
they create for standards development, software vendors, and service
providers. OASIS has announced the launch of a new Telecommunications
Services Member Section which will work to bring the full advantages
of SOA to the telecommunications industry. At Open Standards 2008, the
OASIS Open CSA Member Section will host a table-top exhibition showcasing
SCA and SDO supporters, BEA, IBM, Primeton, Rogue Wave, SAP, Software AG,
and Sun Microsystems. Executives from these companies will participate
in a press briefing on the current state of SCA on Tuesday, 29-April-2008.

Protocol for Web Description Resources (POWDER): Grouping of Resources

Members of the W3C Protocol for Web Description Resources (POWDER)
Working Group have published a new Working Draft of "Protocol for Web
Description Resources (POWDER): Grouping of Resources."
POWDER facilitates the publication of descriptions of multiple resources
such as all those available from a Web site. These descriptions are
attributable to a named individual, organization or entity that may or
may not be the creator of the described resources. This contrasts with
more usual metadata that typically applies to a single resource, such as
a specific document's title, which is usually provided by its author.
The current document sets out how Description Resources (DRs) can be
created and published, whether individually or as bulk data, how to link
to DRs from other online resources, and, crucially, how DRs may be
authenticated and trusted. The aim is to provide a platform through which
opinions, claims and assertions about online resources can be expressed
by people and exchanged by machines. The draft describes how sets of
IRIs can be defined such that descriptions or other data can be applied
to the resources obtained by dereferencing IRIs that are elements of the
set. IRI sets are defined as XML elements with relatively loose
operational semantics. This is underpinned by the formal semantics of
POWDER which include a semantic extension defined in this document. A
GRDDL transform is associated with the POWDER namespace that maps the
operational to the formal semantics. Changes since the 31-October-2007
working draft are documented in the Change Log.
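
As a toy illustration of the grouping idea (not the POWDER XML format
itself), the sketch below models an IRI set by host inclusion, a
simplification of the draft's host-based matching; the hosts and IRIs
are hypothetical:

    import java.net.URI;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class IriSetDemo {
        // A description applies to a resource iff its IRI falls in the set.
        static boolean inSet(URI iri, Set<String> includeHosts) {
            return includeHosts.contains(iri.getHost());
        }

        public static void main(String[] args) {
            Set<String> hosts = new HashSet<String>(
                    Arrays.asList("example.com", "www.example.com"));
            System.out.println(inSet(URI.create("http://www.example.com/a"), hosts)); // true
            System.out.println(inSet(URI.create("http://other.org/a"), hosts));       // false
        }
    }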

Sun Metro and .NET WCF Interoperability

The latest interoperability event (a "plugfest") at Microsoft's Redmond
campus showed impressive results for interoperability between future
releases of Sun's Metro Web Services and Windows Communication Foundation
in .NET 3.5. Metro 1.1 FCS is a Web Services framework that provides
tools and infrastructure for developing Web Services solutions for end
users and middleware developers. With Metro, clients and web services
have a big advantage: the platform independence of the Java programming
language. InfoQ had a chance to talk to Harold Carr, the engineering lead
for enterprise web services interoperability at Sun, about the interop
results. When asked what the relevance of this for Java and .NET developers
would be, Carr highlighted the role of interoperability in general: "Web
services are about wire interoperability, not about the platform they are
implemented in. Therefore, developers, whether using .NET or Java, expect
their services to interoperate. It is relatively straightforward for
platform developers to ensure interoperability for WS-I basic profiles.
But when you add in WS-Policy, WS-Security, WS-Trust, WS-SecureConversation,
WS-ReliableMessaging, etc., the bar for platform implementors gets way
higher. The interop results give transparency into our current development
stage to give people that are planning to use Metro with .NET 3.5
confidence that we will provide an interoperable platform -- rather than
a platform that has only been tested against itself. Reminder: Metro 1.0
already works with .NET 3.0... There are two aspects to consider: the
interop scenarios we test and the deployment of services based on these
specifications. The interop scenarios are very useful, but certainly not
complete -- particularly in reliable messaging. Real deployments will come
up with combinations never tested (either by the interop scenarios tested
at the plugfest or our more extensive in-house testing). Also, .NET 3.0
and Metro 1.0 (both released products) are based on the submission versions
of the WS-* specifications (except for WS-Security, which is standard).
.NET 3.5 (which is released) is based on the standard versions. Metro 1.x,
which will ship later in 2008, will be based on the standard versions
also. All this is a long-winded way to say the standard specs haven't been
used in many deployments based on shipping platforms from different vendors."
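
The point about wire-level interoperability can be illustrated with a
small sketch using JAX-WS, the standard API underneath Metro; the
endpoint, namespace, and payload here are hypothetical. Because the
client exchanges raw SOAP payloads, nothing in it depends on whether
the service behind the URL is Metro or WCF:

    import java.io.StringReader;
    import javax.xml.namespace.QName;
    import javax.xml.transform.Source;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.ws.Dispatch;
    import javax.xml.ws.Service;
    import javax.xml.ws.soap.SOAPBinding;

    public class WireClient {
        public static void main(String[] args) {
            QName svc = new QName("http://example.com/quotes", "QuoteService");
            QName port = new QName("http://example.com/quotes", "QuotePort");
            Service service = Service.create(svc);
            service.addPort(port, SOAPBinding.SOAP11HTTP_BINDING,
                    "http://wcf.example.com/QuoteService"); // hypothetical .NET endpoint
            Dispatch<Source> dispatch =
                    service.createDispatch(port, Source.class, Service.Mode.PAYLOAD);
            Source request = new StreamSource(new StringReader(
                    "<getQuote xmlns='http://example.com/quotes'>"
                    + "<symbol>JAVA</symbol></getQuote>"));
            Source response = dispatch.invoke(request); // same XML either way
        }
    }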

US Department of Homeland Security Signs Historic Agreement with EIC

The Emergency Interoperability Consortium (EIC) announced that a
historic agreement between EIC and the Department of Homeland Security
(DHS) has been signed to further the continued development of data
sharing standards for the emergency response community. With the
endorsement of
Department of Homeland Security Under-Secretary Admiral Jay Cohen, this
unique relationship, thought to be the first of its kind between DHS and
a non-government entity, strengthens an established alliance between the
organizations to jointly promote the design, development, release, and
use of standards to help solve data sharing problems commonly encountered
during life-saving emergency operations. By working together, both DHS
and the EIC believe that government and industry can more quickly and
cost-effectively bridge the data sharing gap between organizations that
must be able to interoperate in response to the natural and man-made
hazards that form the core of the DHS mission. Numerous federal, state
and local organizations as well as private industry benefit from the
collaborative efforts of the DHS/EIC relationship. The Common Alerting
Protocol (CAP) and Emergency Data Exchange Language (EDXL) OASIS
standards, together with several other supporting standards, form an
interoperable data-sharing communications bridge linking organizations,
government entities, and the general public. Los Angeles Fire Department
Battalion Chief Robert Cramer: "By integrating these data technology
capabilities on a platform, we're making it viable to provide data
interoperability among fire, law enforcement, emergency medical services,
Hazmat, and supporting agencies such as county health and transportation.
Creating a common operating picture across multiple agencies and
jurisdictions can reduce response times and save more lives." Specific
objectives of the alliance, as specified in the Memorandum of Agreement,
are to: (1) Improve information sharing capabilities to protect the
nation and its citizens from the consequences of disasters and other
emergencies, regardless of cause; (2) Encourage broad-based participation
in the design, development, acceptance, and use of XML standards to
enable emergency organizations to receive and share data in real time;
(3) Educate federal, state, local, and tribal governments, the media,
citizens, and industry on the meaning and importance of data sharing
within the emergency response communities; (4) Promote innovation in
these communities around open architectures and standards; (5) Foster a
collaborative working environment among federal, state, local, and tribal
jurisdictions on these matters. EIC recommends and assists with the
implementation of technical interoperability standards for emergency
and incident management. The Consortium consists of both public and
private entities to ensure the practical use of open standards. The
EIC has cooperated with DHS, worked with and in its practitioner working
groups to develop detailed requirements for standards, organized
interoperability demonstrations using draft and final standards, and
submitted requirements to OASIS to initiate formal standards development.

Wednesday, March 26, 2008

The Frog Race: The Desire for Control and How Large Companies Interact

I was told recently that of the 250 or so standards that Ecma has
fast-tracked to National Bodies at ISO/IEC, only three have failed to
be accepted. I thought it would be interesting to read up a little
more on them... Control of the API: ISO standards are a
very scary proposition for large companies. Many of them are not
comfortable with any position other than dominance and stability. The
control of the API is terribly important to them, and they regard loss
of control of the API as a risk (whereas it can be a circuit-breaker
and new-market enabler.) This is one reason why all the large companies
try to favour the member-based boutique standards bodies: W3C, OASIS,
Ecma, because there is more chance that they can establish a beachhead
and make participation at those bodies unattractive or futile for their
competitors. The need for stability is sometimes stronger than the need
for dominance: when you see calls for 'equilibrium' to be maintained
in a market, you know that is a buzzword for maintaining the status quo.
(And it is not always the market leader: it can be a smaller player in
fear of losing their share just as much.) It goes in cycles. The wheel
turns and sooner or later the big companies are forced to deal with ISO
and national bodies, and they find this lack of control very unpleasant.
Sooner or later they find some reason to split back to more dominatable
bodies, and they jump ship. It is not all venal (or even venial) or
negative though: for example, look at SGML: Sun's Jon Bosak (and many
others) were unhappy with the way and speed at which SGML maintenance
was proceeding, and we went to W3C as a forum for making a simple profile
and addressing a lot of peripheral issues, and XML in turn became the
foundation for the update of SGML. There is always an interplay between
what the boutique, specialist bodies are interested in, and what the
national-body-based regimes such as ISO are interested in: industry
activity is actually really important, because it clarifies what the
ISO groups should be doing. The downside is that when these large,
usually-US-based multinationals hop over to their boutique bodies,
they have to try to justify their jump by slagging off at ISO/IEC.
This is a predictable behaviour: it has happened in the past, it is
happening now, and it will happen in the future. Some parts of the
complaints are often reasonable, some parts are often merely self-serving,
but it is not a new behaviour...

New Release: OpenUDDI Server Version 0.9.7

On behalf of the OpenUDDI (Open Source UDDI) project team, Joakim Recht
has announced the release of OpenUDDI Server Version 0.9.7. The OpenUDDI
project is focused on creating a high performance, easy-to-use UDDI v3
compliant server and client library. The server and the client are built
in Java -- version 5 for the server and version 1.4 for the client.
The server uses Hibernate, and supports a wide variety of SQL databases,
as well as LDAP for data storage. The project is built on the Novell
UDDI server but with many new features and optimizations. The primary
contributor is Trifork, sponsored by the Danish National IT and Telecom
Agency. OpenUDDI recently moved to a new project site at SourceForge.
Previously, OpenUDDI was hosted at Softwareborsen, the Danish
government's open source site; however, that site was only in Danish,
and OpenUDDI has attracted interest from other countries, so the new
project site is targeted at a wider audience. Changelog for server
v0.9.7: (1) Tuned category bag heuristics; (2) Improved Hibernate
performance; (3) Always replace Hibernate properties in installer; (4)
Schedule authToken expiration thread periodically; (5) Configure log4j
logging correctly; the default seems to be that 'isTraceEnabled()' returns
true while 'trace()' does nothing. Version 0.9.7 is mainly a maintenance
release with bug and performance fixes.

BPEL4People and WS-HumanTask Get Reference Implementation

BPEL4People and WS-HumanTask (WS-HT), while still specifications in the
OASIS standardization process, can now be used in service-oriented
architecture (SOA) development, said Mike Pellegrini, principal architect
at Active Endpoints Inc. He has incorporated both specifications in this
month's release of his company's visual orchestration systems (VOS)
product, ActiveVOS 5.0, which provides graphic tools for design,
development, testing, deployment and maintenance of SOA applications...
This past week, Pellegrini demonstrated how BPEL4People and WS-HT can be
used in the orchestration of a loan processing application. The demo
showed a business process application where for routine loans a filter
can automate the assessment of whether an applicant is a good or bad risk.
However, when the applicant's credit history is a gray area, a loan
officer must review the application and sign-off on its approval or
denial. That is where BPEL4People and WS-HT come into play. Using those
two specifications, the hand off from the automated process to the loan
officer is tracked by the BPEL-based application, Pellegrini said. As
he showed a view of this process through his visual orchestration tool,
he explained: "It has been routed through the WS-HT specification task
definition. It is routed to a task management system. Now, the system
is just tapping its fingers waiting for the human to finish." Pellegrini
said this amounted to "a sort of reference implementation for WS-HT
in-box APIs that allows us to get a list of the tasks at hand and the
completed tasks." While the task is not generally a long-running endeavor,
the specifications do allow for the fact that humans aren't usually as
fast at completing tasks as computers are. In the demo, there is
allowance for the task to be saved if the loan officer cannot complete
it in a day, so he can finish it the next day...

Workflow Resource Patterns as a Tool to Support OASIS BPEL4People

OASIS [recently] announced the formation of the WS-BPEL Extension for
People (BPEL4People) Technical Committee... As part of the standardization
process, these proposals are still open to comment in order to ensure
that they meet with general acceptance before being finalized as
standards. However, one of the difficulties with evaluating new
standards initiatives is in finding a suitable conceptual basis against
which their capabilities can be examined and benchmarked. In order to
assist with this activity, this paper proposes the use of the workflow
resource patterns, as a means of evaluating the BPEL4People and
WS-HumanTask proposals. The resource patterns provide a comprehensive
description of the various factors that are relevant to human resource
management and work distribution in business processes. They offer a
means of examining the capabilities of the two proposals from a conceptual
standpoint in a way that is independent of specific technological and
implementation considerations. Through this examination, we hope to
determine where the strengths and weaknesses of these proposals lie and
what opportunities there may be for further improvement. The resource
patterns were developed as part of the Workflow Patterns Initiative,
an ongoing research project that was conceived with the goal of
identifying the core architectural constructs inherent in workflow
technology. The original objective was to delineate the fundamental
requirements that arise during business process modeling on a recurring
basis and describe them in an imperative way. A patterns-based approach
was taken to describe these requirements as it offered both a
language-independent and technology-independent means of expressing
their core characteristics in a form that was sufficiently generic to
allow for its application to a wide variety of offerings. To date, 126
patterns have been identified in the control-flow, data, and resource
perspectives, and they have been used for a wide variety of purposes,
including evaluation of PAISs, tool selection, process design, education,
and training. The workflow patterns have been enthusiastically received
by both industry practitioners and academics alike. The original
Workflow Patterns paper has been cited by over 150 academic publications,
and the workflow patterns website has been visited more than 100,000
times... We examine the intention and coverage provided by the BPEL4People
and WS-HumanTask proposals from various perspectives, starting with their
intention and relationship with related proposals and standards and then
examining their informational and state-based characteristics on a
comparative basis against those described by the workflow resource
patterns... We hope that the observations and recommendations [...] will
assist the OASIS BPEL4People standardization efforts. We are convinced
that an analytical approach based on the workflow/resource patterns can
aid discussions and remove ambiguities...

E-Discovery Guru Not Yet Wed to XML

I want XML the dragon slayer: all the functionality of native electronic
evidence coupled with the ease of identification, reliable redaction and
intelligibility of paper documents. The promise is palpable; but for now,
XML is just a clever replacement for load files, those clumsy Sancho
Panzas that serve as squire to addled TIFF image productions. Maybe
that's reason enough to love XML... In e-discovery, we deal with
information piecemeal, such as native documents and system metadata or
e-mail messages and headers. We even deconstruct evidence by imaging it
and stripping it of searchability, only to have to reconstruct the lost
text and produce it with the image. Metadata, header data and searchable
text tend to be produced in containers called load files housing delimited
text, meaning that values in each row of data follow a rigid sequence and
are separated by characters like commas, tabs or quotation marks. Using
load files entails negotiating their organization or agreeing to employ
a structure geared to review software such as CT Summation or Lexis Nexis
Concordance. Conventional load files are unforgiving. Deviate from the
required sequence, or omit, misplace or include an extra delimiter, and
it's a train wreck... There is no standard e-discovery XML schema in wide
use, but consultants George Socha and Tom Gelbmann are promoting one
crafted as part of their groundbreaking Electronic Discovery Reference
Model project. Socha (a member of LTN's Editorial Advisory Board) and
Gelbmann have done an impressive job securing commitments from e-discovery
service providers to adopt EDRM XML as an industry lingua franca... A
mature e-discovery XML schema must incorporate and authenticate native
and nontextual data and ensure that the resulting XML stays valid and
well-formed. It's feasible to encode and incorporate binary formats
using MIME (the same way they travel via e-mail), and to authenticate
by hashing; but these refinements aren't yet a part of the EDRM schema.
So stay tuned. I don't love XML yet, but it promises to be everyone's
new best friend. See also: Electronic Discovery Reference Model (EDRM) XML.
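
To see why conventional load files are so unforgiving, consider this
toy sketch; the field layout and tab delimiter are invented for
illustration. Dropping a single field silently shifts every later
value into the wrong column:

    public class LoadFileDemo {
        public static void main(String[] args) {
            String good = "DOC-0001\tSmith, Alice\t2008-03-12\tRe: merger";
            String bad  = "DOC-0002\tJones, Bob\tRe: merger"; // date field omitted
            for (String row : new String[] { good, bad }) {
                String[] f = row.split("\t");
                // In the bad row, the subject silently lands in the date
                // column -- the "train wreck" described above.
                System.out.printf("id=%s author=%s date=%s subject=%s%n",
                        f[0], f[1], f[2], f.length > 3 ? f[3] : "(missing)");
            }
        }
    }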

Getting Started with XAML in Silverlight

The popularity of declarative markup languages has gradually increased
since the initial release of HTML. This shouldn't come as a surprise to
anyone given that markup languages let information be presented to end
users without requiring any knowledge of a programming language. For
years HTML has served as the declarative language of choice for presenting
information to end users through a browser and it certainly isn't going
anywhere in the near future. However, new declarative languages such as
Extensible Application Markup Language (XAML) have emerged, providing an
alternate means for displaying data in more rich and engaging ways than
HTML is capable of doing. In this article, I introduce the XAML language
and describe several ways it can be used in Silverlight applications.
The topics covered will focus on functionality available in Silverlight
1.0. Future articles will introduce new XAML features available in
Silverlight 2.0. XAML was originally created for the Windows Presentation
Foundation (WPF) technology released with .NET 3.0. WPF and XAML provide
a way to integrate designers into the application development process
and create rich and interactive desktop (and even Web) applications that
can bind to a variety of data sources. The release of Silverlight 1.0
brought XAML to the world of rich internet application development.
Silverlight exposes a subset of the XAML language found in WPF that can
be run directly in the browser once the Silverlight plug-in has been
installed... Although Silverlight provides a subset of the XAML language
available in WPF, the different declarative elements and attributes
available can accomplish a lot and provide functionality that simply
isn't available in the HTML language. For example, different types of
shapes such as rectangles, ellipses, and lines can be defined and
displayed using XAML. Different types of backgrounds can be defined for
shapes as well including gradients, images, and media clips... Learning
XAML is much like learning HTML; you have to learn the different tag
names and understand how tags can be nested within parent containers.
Once you know the available elements and attributes it's relatively easy
to create a XAML file.

Google, MySpace, Yahoo Forge OpenSocial Foundation

Google, MySpace, and Yahoo announced they have agreed to form a non-profit
group that would govern the development of a standard application
programming interface that developers could use in building software for
supporting online social networks. The three Internet companies expected
the OpenSocial Foundation to launch in 90 days, and asked others in
the industry to rally behind the OpenSocial API, which was developed by
Google to foster development across emerging social-network development
platforms. MySpace, which accounted for three-quarters of the Web traffic
to social networks in the U.S. in 2007, and its second-place rival Facebook
have been opening up their platforms to third-party developers in an attempt
to add services that may attract advertisers and keep subscribers on the
sites longer. While Facebook is offering its own proprietary tools,
MySpace and other social networks, including Google's Orkut, Hi5, Friendster,
Imeem, LinkedIn, and Plaxo, have agreed to adopt the OpenSocial API, which
connects to Web apps built in JavaScript and HTML. Google, MySpace and
Yahoo have agreed to contribute technology to the OpenSocial Foundation
under a "non assertion covenant," which means they won't seek to enforce
any patents on the intellectual property, representatives told reporters
during a teleconference. All companies joining the foundation would be
expected to contribute technology under the Creative Commons copyright
license. The companies will continue to work together and with the
OpenSocial community to further advance the specification through the new
foundation, as well as an open source reference implementation called
Shindig. Shindig is a new project in the Apache Software Foundation
incubator and is an open source implementation of the OpenSocial
specification and gadgets specification. The architectural components of
Shindig are: (1) Gadget Container JavaScript: core JavaScript foundation
for general gadget functionality; this JavaScript manages security,
communication, UI layout, and feature extensions, such as the OpenSocial
API; (2) Gadget Server: an open source version of Google's gmodules.com,
which is used to render the gadget XML into JavaScript and HTML for the
container to expose via the container JavaScript; (3) OpenSocial Container
JavaScript: JavaScript environment that sits on top of the Gadget
Container JavaScript and provides OpenSocial specific functionality --
profiles, friends, activities, datastore; (4) OpenSocial Gateway Server:
an implementation of the server interface to container-specific
information, including the OpenSocial REST APIs, with clear extension
points so others can connect it to their own backends.

Developing International Standards for Very Small Enterprises

Industry recognizes that very small enterprises (VSEs) contribute
valuable products and services. In Europe, for example, 85 percent of
the IT sector's companies have only one to 10 employees. According to
a recent survey, 78 percent of software development enterprises in the
Montreal area have fewer than 25 employees, while 50 percent have fewer
than 10. Studies and surveys confirm that current software engineering
standards do not address the needs of these organizations, especially
those with a low capability level. Compliance with standards such as
those from ISO and the IEEE is difficult if not impossible for them to
achieve. Consequently, VSEs have no or very limited ways to be recognized
as enterprises that produce quality software systems in their domain.
Therefore, they are often cut off from some economic activities. To
rectify some of these difficulties, delegates from five national bodies
at the 2004 International Organization for Standardization/International
Electrotechnical Commission Joint Technical Committee 1/Sub Committee
7 (SC7) plenary meeting in Australia reached a consensus regarding the
necessity of providing VSEs with standards adapted to their size and
particular context, including a set of profiles and guides... VSEs
express the need for assistance to adopt and implement standards. More
than 62 percent would like more guidance with examples, and 55 percent
asked for lightweight and easy-to-understand standards, complete with
templates. Finally, the respondents indicated that it must be possible
to implement standards with minimum cost, time, and resources. In 2005,
at the SC7 Plenary meeting in Finland, Thailand proposed the creation
of a new working group to meet these objectives. Twelve countries voted
in favor of establishing such a group, named Working Group 24 (WG24).
WG24 used the concept of ISO profiles (ISP: International Standardized
Profile) to develop the new standard for VSEs. A profile is defined as
'a set of one or more base standards and/or ISPs, and, where applicable,
the identification of chosen classes, conforming subsets, options and
parameters of those base standards, or ISPs necessary to accomplish a
particular function'. [One approach involves production of] guidelines
explaining in more detail the processes outlined in the profile. These
guidelines will be published as ISO technical reports and should be
freely accessible to VSEs. The guidelines integrate a series of deployment
packages that provide a set of artifacts developed to facilitate and
accelerate the implementation of a set of practices for the selected
framework in a VSE. The elements of a typical deployment package include
a process description (tasks, inputs, outputs, and roles), guide, template,
checklist, example, presentation material, mapping to standards and
models, and list of tools to help VSEs implement the process. WG24 plans
to produce a final draft in 2009, with publication by ISO/IEC scheduled
for 2010. In the meantime, the group will make deployment packages freely
available to VSEs. The group also will develop other profiles, covering
different capability levels and application domains, such as finance or
defense.

Web Technologies: SOA What?

Why is the enterprise software industry all abuzz about SOA? The SOA
world has recently begun to realize that SOA applications are ultimately
still just applications. Data services are thus an important class of
services that warrant explicit consideration in designing, building, and
maintaining SOA applications. Those of us who grew up in the "pre-SOAic"
era will quickly notice that something is missing from [typical SOA
diagrams]: a data model associated with the application. To use a simple
analogy, services provide operations that are akin to verbs -- the
business actions available to application developers. Missing are the
nouns -- the data entities. By focusing only on business processes and
services, the basic SOA model misses what the actions are about. In
addition, business processes often need access to information. While
some middleware software vendors have been making data service noises
for several years, a survey of current information-integration vendor
offerings reveals an emerging consensus that data services will play a
key role in SOA applications. BEA Systems, Composite Software, IBM,
Microsoft, Red Hat, and Xcalia are among the growing list of companies
seeking to make data services easier to build and maintain with recent
or forthcoming products. In addition to service-enabling data, most such
products include data-integration capabilities that provide uniform,
service-oriented access to otherwise disparate data types and data
sources. Is SOA the next wave or a passing fad? Several signs point to
a lasting future for SOA. A range of organizations and companies are
pursuing SOA initiatives today, and the emerging SaaS trend suggests
that future enterprises' business processes will commonly orchestrate
services residing both in-house and across the Web. And what about data
services -- are they for real? Because data will always be central to
applications, it's likely that data services will "stick" in the SOA
world. Consequently, systems that make building and managing data
services easier will become an increasingly significant piece of the
enterprise information integration puzzle. Moreover, data-service
modeling will become a design discipline in need of sound new
methodologies and supporting tools. See also: IEEE Computer Magazine.
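
A toy sketch of the verbs-versus-nouns point, with invented names: the
service interface supplies the actions, while the data entities those
actions are about form a model of their own -- the part that data
services make explicit:

    public interface LoanService {           // verbs: the business actions
        Approval approve(LoanApplication app);
    }

    class LoanApplication {                  // nouns: the data entities acted upon
        String applicantId;
        double amount;
    }

    class Approval {
        boolean granted;
    }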

Saturday, March 22, 2008

Opinion: WSO2 Mashup Server Takes First Steps

Mashups (composite applications) promise the ability to easily create
useful new applications from existing services and Web applications.
By combining data from multiple sources across the Web, and from
within the enterprise, mashups can help distill important information
for people who would otherwise need to gather and distill it manually.
Composite applications in 2008 are in the "early adopter" phase, with
companies exploring their uses and potential in the enterprise. There's
no lack of entrants in the field; a quick search turned up at least 20
different mashup platforms, both commercial and open source. Products
such as JackBe Presto, Nexaweb Enterprise Web 2.0 Suite and Kapow's
RoboSuite illustrate the range of approaches. WSO2's Mashup Server is
aimed at Web developers seeking a complete environment for building,
deploying and administering composite applications. It's clear that
the WSO2 Mashup Server design team gave some thought to what such
developers would need to create mashups, and for those with an
understanding of JavaScript, XML, and AJAX, this toolset makes developing
mashups simple... Parsing XML in JavaScript is usually a difficult and
tedious task, but the inclusion of Mozilla's E4X (ECMAScript for XML)
makes parsing XML simpler. JSON (JavaScript Object Notation) would be
a good alternative communication mechanism, and hopefully future versions
will include the option of returning JSON objects as well. Hosted Objects
are objects hosted within the WSO2 Mashup Server that provide access to
remote data sources. These objects are written in Java, and provide
access to APP (Atom Publishing Protocol) resources, RSS feeds, e-mail,
and instant messaging services (although only for sending messages),
among others. One of the more useful if more complicated hosted objects
is the "scraper" object, which makes use of Web-Harvest to screen scrape
Web pages that do not provide Web services. From the enterprise standpoint,
significant omissions are the lack of JMS and SQL hosted objects.
Creating the client side of the mashup is straightforward. Using the
generated JavaScript stubs, you simply need to include them in the Web
page that's consuming the service... Mooshup.com is a community of
mashup authors, where they can develop, share, discover, and run
JavaScript-powered mashups. The site is powered by the WSO2 Mashup
Server, which is available as a free open source download.

Demand for Interop Fuels J2EE, Microsoft Unity

This article aims to give developers and architects an armchair tour
of the scope and depth of how J2EE leading vendors are working with
Microsoft to push the availability of next-gen interop technologies
and Best Practices. Last month's JavaOne put J2EE/.NET interop in the
spotlight like never before. Sun and Microsoft technical experts stood
together on a Moscone stage in San Francisco, and debuted co-developed
interop technologies for helping J2EE developers secure traffic between
J2EE and .NET platforms. If JavaOne is any indication, the fences
between J2EE and .NET are definitely coming down. Simon Guest, an interop
specialist and senior program manager on Microsoft's Architecture Strategy
Team, presented at JavaOne. Following Microsoft's Andrew Layman's
co-keynote with Sun's Mark Hapner, Guest commented, "we got really good
applause from the audience. A lot of developers came by our booth to tell
us they were glad we were there, which was good to hear" -- the implication
being that Java users and developers are also telling Java vendors it's
OK to work closely with Microsoft on interop. J2EE/.NET interop is
'extremely important' to IBM customers, according to Jeff Jones, IBM's
director of strategy for information management software (IMS): "Customers
tell us that .NET has come more front and center for them, so our focus on
.NET interop has intensified. [IBM and Microsoft] now have a jointly staffed
lab in Kirkland, Washington. At that lab, IBM has woven support into DB2
for .NET devs, and made great progress with our ability to interop with
Windows Server 2003 and the upcoming 2005 version... BEA is also
intensifying its interop programs with Microsoft, but their approach is
a bit different than Big Blue's. BEA execs say J2EE/.NET interop will be
key to providing better unified support for .NET and J2EE programming
models, making it easier for developers and architects to program in a
mixed environment. Earlier this spring, BEA introduced its AquaLogic
Service Bus, an abstraction layer designed to sit above Java/J2EE and
.NET environments... For Sun Microsystems there are very compelling
reasons to partner with Microsoft, and work to improve J2EE/.NET interop
tools and approaches. Customers of both companies are demanding
interoperability at all levels, but perhaps most importantly interop must
come with a unified security model. As Sun and Microsoft interop experts
joined together on the JavaOne stage, McNealy demonstrated a new interop
standard, dubbed Message Transmission Optimization Mechanism
(MTOM). MTOM enables developers to send binary attachments between Java
and .NET using Web Services, while retaining the protections offered by
WS-* security and reliability specs...

OpenLiberty-J Client Library for Liberty Web Services (ID-WSF 2.0)

OpenLiberty.org, the global open source community working to provide
developers with resources and support for building interoperable,
secure and privacy-respecting identity services, has announced the
release of OpenLiberty-J, an open source Liberty Web Services
(ID-WSF 2.0) client library designed to ease the development and
accelerate the deployment of secure, standards-compliant Web 2.0
Applications. OpenLiberty.org will hold a public webcast to review
OpenLiberty-J on April 2, 2008 at 8 am US PT. OpenLiberty-J enables
application developers to quickly and easily incorporate the
enterprise-grade security and privacy capabilities of the proven
interoperable Liberty Alliance Identity Web Services Framework into
identity consuming applications such as those found in enterprise
service oriented architectures (SOAs), Web 2.0 social networking
environments and client-based applications on PCs and mobile devices.
Released as beta today under the Apache 2.0 license, OpenLiberty-J
code is available for review and download at OpenLiberty.org.
OpenLiberty-J is based on J2SE, and open source XML, SAML, and web
services libraries from the Apache Software Foundation and Internet2,
including OpenSAML, a product of the Internet2 Shibboleth project. The
library implements the Liberty Advanced Client functionality of Liberty
Web Services standards. Developers can immediately begin using the
OpenLiberty-J code to build a wide range of new identity applications
that are secure and offer users a high degree of online privacy
protection. "With the release of OpenLiberty-J, developers now have a
comprehensive library of open source code to begin driving security
and privacy into applications requiring identity management
functionality," said Conor P. Cahill, Principal Engineer, Intel and
OpenLiberty-J contributor. "OpenLiberty.org encourages the global open
source community to begin working with the code and welcomes contributions
to further the evolution of OpenLiberty-J as the project moves from
beta to general availability later this year."

An Extensible Markup Language (XML) Configuration Access Protocol (XCAP) Diff Event Package

This document describes an "xcap-diff" SIP event package, with the aid
of which clients can receive notifications of the partial changes of
Extensible Markup Language (XML) Configuration Access Protocol (XCAP)
resources. The initial synchronization and document updates are based
on using the XCAP-Diff format. XCAP (RFC 4825) is a protocol that
allows clients to manipulate XML documents stored on a server. These
XML documents serve as configuration information for application
protocols. As an example, RFC 4662 resource list subscriptions (also
known as presence lists) allow a client to have a single SIP subscription
to a list of users, where the list is maintained on a server. The server
will obtain presence for those users and report it back to the client.
Another specification, "Extensible Markup Language (XML) Document Format
for Indicating a Change in XML Configuration Access Protocol (XCAP)
Resources" defines a data format which can convey the fact that an XML
document managed by XCAP has changed. This data format is an XML document
format, called an XCAP diff document. This format can indicate that a
document has changed, and provide its previous and new entity tags. It
can also optionally include a set of patch operations which indicate
how to transform the document from the version prior to the change, to
the version after it. As defined in this XCAP Diff Event Package memo,
an "XCAP Component" is an XML element or an attribute, which can be
updated or retrieved with the XCAP protocol. "Aggregating" means that
while XCAP clients update only a single XCAP component at a time, several
of these modifications can be aggregated together with the XML-Patch-Ops
semantics. When a client starts an "xcap-diff" subscription it may not
be aware of all the individual XCAP documents it is subscribing to.
This can, for instance, happen when a user subscribes to his/her
collection of a given XCAP Application Usage where several different
clients update the same XCAP documents. The initial notification can
give the list of these documents which the authenticated user is allowed
to read. The references and the strong ETag values of these documents
are shown so that a client can separately fetch the actual document
contents with the HTTP protocol. After these document retrievals, the
subsequent SIP notifications can contain patches to these documents by
using XML-Patch-Ops semantics. While the initial document synchronization
is based on separate HTTP retrievals of full documents, XML elements
or attributes can be received "in-band", that is, straight within the
'xcap-diff' notification format.
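
A minimal sketch (with a hypothetical XCAP server and document URI) of
the document fetch an "xcap-diff" subscriber performs after the initial
notification: a plain HTTP GET, using the strong ETag from the
notification to skip documents that have not changed:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class XcapFetch {
        public static void main(String[] args) throws Exception {
            URL doc = new URL("http://xcap.example.com/xcap-root/"
                    + "resource-lists/users/sip:alice@example.com/index"); // hypothetical
            HttpURLConnection conn = (HttpURLConnection) doc.openConnection();
            conn.setRequestProperty("If-None-Match", "\"etag-from-notification\"");
            if (conn.getResponseCode() == 304) {
                System.out.println("Local copy is current; await SIP patch notifications.");
            } else {
                InputStream body = conn.getInputStream(); // full document to sync from
                // ... parse and store, then apply subsequent XML-Patch-Ops deltas
            }
        }
    }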

Quark Delves Into Publishing Workflow

Publishing software company Quark has introduced new software poised to
help tame increasingly unwieldy publishing production routines. Quark
announced the release of DPS earlier this month, at the AIIM
International Exposition and Conference in Boston. The newly released
Quark Dynamic Publishing Solution sets out to confront a growing
problem experienced by organizations that publish a lot of material --
that of keeping track of the material as it is used across different
media... Design publication tools such as Quark's QuarkXPress and
Adobe's InDesign have been ill-suited to reformat designed material
for the Web, so the process of moving printed material to the Web
tends to be a time-consuming and sometimes still manual process.
According to the product description: "Quark Dynamic Publishing
Solution (Quark DPS) consists of multiple software components, including
desktop tools for creating content, and server-based technology for
automating publishing workflows. It is based on open standards to
allow for easy integration with enterprise content management systems
and other business applications. Dynamic publishing automates the
creation and delivery of information across multiple channels, from
print to Web, email and beyond. It allows users to create reusable
components of information that can be combined to create various types
of documents for any audience. Dynamic publishing automates the page
formatting process allowing for the production of print, Web, and
electronic content from a single source of information. Quark uses XML
(Extensible Markup Language) as the underlying data format for your
information because its capabilities line up perfectly with dynamic
publishing's requirements. XML lets you break down your information
into components of any size that may be useful. For example, an
article might include a title, subtitle, and body copy, which itself
might consist of a number of components such as paragraphs. Some of
those components may be reused across multiple articles or documents,
thereby enabling you to create a single source where one change can
update many documents. In addition, XML enforces the absolutely
consistent structure that makes automation possible. Without this
consistency, the only option would be to continue the labor-intensive
effort of hand-crafting pages indefinitely. XML allows information to
exist independently of its formatting. By applying formatting separately,
through an automated process, XML-based information can easily be
published in multiple formats and multiple types of media..."

W3C Last Call Working Draft: Cool URIs for the Semantic Web

W3C announced that members of the Semantic Web Education and Outreach
(SWEO) Interest Group have published the Last Call Working Draft for
"Cool URIs for the Semantic Web." The document is intended to become
a W3C Interest Group Note giving a tutorial explaining decisions of
the TAG for newcomers to Semantic Web technologies. It was initially
based on the DFKI Technical Memo TM-07-01 and was subsequently published
as a W3C Working Draft in December 2007; it was reviewed by the Technical
Architecture Group (TAG) and the Semantic Web Deployment Group (SWD).
The document is a practical guide for implementers of the RDF
specification. The title is inspired by Tim Berners-Lee's article "Cool
URIs don't change". It explains two approaches for RDF data hosted on
HTTP servers. Intended audiences are Web and ontology developers who
have to decide how to model their RDF URIs for use with HTTP. Applications
using non-HTTP URIs are not covered. The document is an informative guide
covering selected aspects of previously published, detailed technical
specifications. The Resource Description Framework (RDF) allows users
to describe both Web documents and concepts from the real world -- people,
organisations, topics, things -- in a computer-processable way. Publishing
such descriptions on the Web creates the Semantic Web. URIs (Uniform
Resource Identifiers) are very important, providing both the core of
the framework itself and the link between RDF and the Web. This document
presents guidelines for their effective use. It discusses two strategies,
called 303 URIs and hash URIs. It gives pointers to several Web sites
that use these solutions, and briefly discusses why several other
proposals have problems. It is important to understand that using URIs,
it is possible to identify both a thing (which exists outside of the web)
and a web document describing the thing. For example, the person Alice
is described on her homepage. Bob may not like the look of the homepage,
but may fancy the person Alice. So two URIs are needed: one for Alice,
one for the homepage or an RDF document describing Alice. The question is
where to draw the line between the case where either is possible and the
case where only descriptions are available. According to W3C guidelines
in "Architecture of the World Wide Web, Volume One," we have an Web
document (there called information resource) if all its essential
characteristics can be conveyed in a message. Examples are a Web page,
an image or a product catalog. The URI identifies both the entity and
indirectly the message that conveys the characteristics. In HTTP, a
status 200 response code should be sent when a Web document has been
accessed; a different setup is needed when publishing URIs that are
meant to identify entities.
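
A minimal sketch of the 303 strategy the draft describes: dereferencing
a URI that names a real-world thing yields "303 See Other" pointing to
a document about the thing. The URIs are hypothetical:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CoolUriCheck {
        public static void main(String[] args) throws Exception {
            URL thing = new URL("http://example.org/id/alice"); // names the person, not a page
            HttpURLConnection conn = (HttpURLConnection) thing.openConnection();
            conn.setInstanceFollowRedirects(false); // we want to see the 303 itself
            if (conn.getResponseCode() == 303) {
                // e.g. http://example.org/doc/alice, a document describing Alice
                System.out.println("Description at: " + conn.getHeaderField("Location"));
            }
        }
    }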

Google Sees Surge in Web Use on Hot Mobile Phones

Google has seen an acceleration of Internet activity among mobile phone
users in recent months since the company introduced faster Web services
on selected phone models, fueling confidence the mobile Internet era is
at hand, the company said on Tuesday. Early evidence showing sharp
increases in Internet usage on phones, not just computers, has emerged
from services Google has begun offering in recent months on Blackberry
e-mail phones, Nokia devices for multimedia picture and video creators
and business professionals and the Apple iPhone... The growing
availability of flat-rate data plans from phone carriers instead of
per-minute charges that previously discouraged Internet use, along with
improved Web browsers on mobile phones as well as better-designed services
from companies like Google are fueling the growth. Google made the
pronouncement as it introduced a new software download for mobile phones
running Microsoft's Windows Mobile software that conveniently positions
a Google Web search window on the home screen of such phones. Similar
versions of the search software which Google introduced for BlackBerry
users in December and certain Nokia phones in February have sped up the
time users take to perform Web searches by 40 percent and, in turn, driven
usage. The software shortcuts the time it takes for people to perform Web
searches on Google by eliminating initial search steps of finding a Web
browser on the phone, opening the browser, waiting for network access,
and getting to Google.com. By making a Google search box more convenient,
mobile phone users have begun using the Internet more. Microsoft expects
to have sold 20 million Windows Mobile devices by the end of its fiscal
year in June, which together with Blackberry and Symbian-based phones
represent upward of 85 percent of the Internet-ready smartphones sold
in the world.

First Look: Safari 3.1 Adds Speed and HTML 5 Features

Now available also for Windows: Safari 3.1 is, Apple claims, "the
fastest web browser on any platform," loading pages up to 1.9 times
faster than Internet Explorer 7 and up to 1.7 times faster than Firefox
2... it executes
JavaScript up to 6 times faster than Internet Explorer 7 and up to 4
times faster than Firefox 2." Apple released Safari 3.1 on March 18,
2008 with an updated rendering engine that makes the fastest Internet
browser even faster. On top of that, Apple's new browser includes some
features that reflect the future of the HTML 5 specification: offline
storage, media support, CSS animations, and Web fonts. Under the hood
Apple has made some significant changes that it has pulled from the
latest builds of the open-source WebKit engine. WebKit is the framework
version of the engine that's used by Safari. It is also the basis of the
Web browsing engine in iPhone's Mobile Safari, Symbian's browser, the
Google Android platform, and Adobe's new AIR platform. To check out how
well Safari 3.1 handles Web sites, I ran it through some popular standards
testing -- and found that it leads the pack. In the Acid3 Tests, which
were created by the Web Standards Project to test dynamic browser
capabilities, Safari 3.1 scored 75 out of 100, significantly higher than
the previous version of Safari and other shipping browsers (Firefox 3
Beta 4 scored 68, while the most recent WebKit scored 92). However, the
big news is how fast the new version of Safari is... One of the drawbacks
of Safari has been the perceived "over-smoothing" or softening of fonts
on the PC. While this hasn't been completely fixed, Apple's Safari 3.1
allows Web sites to specify fonts outside the seven Web-safe font families;
these new fonts can be downloaded by the browser as needed. Unfortunately,
there are still prominent features that are part of rival browsers that
Safari simply can't match. For example, Safari doesn't have all of the
add-ons that Firefox enjoys, such as the Google toolbar... With the 3.1
release, Safari has become the fastest browser you can use. If that isn't
enough reason to make a switch, its strong adherence to Web standards and
rapid adoption of new technologies might make you think again.

OAI4J Open Source Client Library Supports OAI Metadata Specifications

Software developers at the National Library of Sweden have announced the
release of OAI4J, a client library for OAI-PMH and OAI-ORE written in
Java, as Open Source. The project is hosted on SourceForge. OAI-PMH
(The Open Archives Initiative Protocol for Metadata Harvesting) provides
an application-independent interoperability framework based on metadata
harvesting, where a "record" is returned in an XML-encoded byte stream
in response to an OAI-PMH request for metadata from an item. Object Reuse
and Exchange (OAI-ORE) specifications allow distributed repositories to
exchange information about their constituent digital objects. These
specifications include approaches for representing digital objects and
repository services that facilitate access and ingest of these
representations. OAI4J has been released as Open Source under the Apache
License, version 2.0. Features for OAI-PMH: (1) a convenient Java API
that lets you perform queries and handle the harvested data in an
object-oriented fashion; (2) handling of all the verbs and responses of
OAI-PMH. Features for OAI-ORE: (1) lets you create and build Resource
Maps from scratch programmatically; (2) handles parsing of existing
Resource Maps written as Atom feeds; (3) allows Resource Map objects to
be serialized into Atom feeds; (4) can be extended to also handle
RDF/XML parsing/serialization. The SourceForge download site provides
links for
a binary download, Java documentation for the API, and some examples to
get you started.
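
OAI4J wraps the protocol in the object-oriented API described above;
for orientation, here is the raw OAI-PMH exchange it abstracts -- a
plain HTTP GET against a repository (hypothetical URL), answered by an
XML-encoded stream of records:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class OaiPmhRawRequest {
        public static void main(String[] args) throws Exception {
            URL req = new URL("http://repository.example.org/oai"
                    + "?verb=ListRecords&metadataPrefix=oai_dc"); // one of the OAI-PMH verbs
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(req.openStream(), "UTF-8"));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line); // <OAI-PMH>...<ListRecords><record>... XML
            }
            in.close();
        }
    }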

Microsoft Working with Eclipse on Vista, ID Links

Microsoft's much-anticipated revelations about collaborations with the
Eclipse Foundation Wednesday did not include joining the open-source
tools foundation. But the two organizations are working together to
enable use of Eclipse technology to build Java applications for Windows
Vista. Also, Microsoft and Eclipse are collaborating on identity
management via linking Eclipse's Higgins Project with Microsoft's
CardSpace technology. Microsoft's efforts were detailed by Sam Ramji,
Microsoft director of platform technology strategy, at the EclipseCon
2008 conference in Santa Clara, California. Ramji guided the audience
through a list of efforts Microsoft has made in the open-source world,
such as accommodations for PHP (Hypertext Preprocessor), JBoss, and
Novell's Xen hypervisor. Ramji also said Microsoft itself has 200 projects
hosted on its CodePlex open-source hosting site. Microsoft traditionally
has been viewed as the anti-open-source company, but Ramji spared no
detail looking to refute this notion, listing a myriad of projects
undertaken over the years: "Today, we're architecting our participation
in the open-source world." The Java enablement effort for Vista involves
collaboration on an SWT (Standard Widget Toolkit) to work with Microsoft's
WPF (Windows Presentation Foundation) technology for graphical presentation.
This will enable Java to be used as an authoring language to write
WPF-enabled applications. Ramji wrote in his blog: "... the CardSpace
team at Microsoft was already working actively with the Higgins Project
to establish a secure, interoperable framework for user identity on the
web -- an architecture known as the Identity Metasystem. Since the
inception of Higgins, the CardSpace team has worked very closely with
the Higgins team, providing them the protocol documentation they needed
to be able to build an identity selector that is interoperable with
CardSpace, as well as placing those protocol specifications under the
OSP so that they knew that it was safe to do so. We share a commitment
to building a user-centric, privacy-preserving, secure, easy-to-use
identity layer for the Internet..."
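
As background to the toolkit effort described above: SWT is the Java
windowing toolkit that the collaboration would teach to render through
WPF. A minimal SWT program, using only the standard SWT API and
independent of the WPF work, looks like this:

    import org.eclipse.swt.SWT;
    import org.eclipse.swt.widgets.Display;
    import org.eclipse.swt.widgets.Label;
    import org.eclipse.swt.widgets.Shell;

    public class SwtHello {
        public static void main(String[] args) {
            Display display = new Display();     // ties into the native GUI
            Shell shell = new Shell(display);    // a top-level window
            shell.setText("SWT on Windows");
            Label label = new Label(shell, SWT.NONE);
            label.setText("Hello from SWT");
            label.pack();                        // size label to its text
            shell.setSize(240, 100);
            shell.open();
            while (!shell.isDisposed()) {        // standard SWT event loop
                if (!display.readAndDispatch()) display.sleep();
            }
            display.dispose();
        }
    }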

Addressing Doubts about REST

Invariably, learning about REST means that you'll end up wondering just
how applicable the concept really is for your specific scenario. And
given that you're probably used to entirely different architectural
approaches, it's only natural that you start doubting whether REST, or
rather RESTful HTTP, really works in practice, or simply breaks down
once you go beyond introductory, 'Hello, World'-level stuff. In this
article, I will try to address ten of the most common doubts people have
about REST when they start exploring it, especially if they have a strong
background in the architectural approach behind SOAP/WSDL-based Web
services. (1) REST may be usable for CRUD, but not for 'real' business
logic. (2) There is no formal contract/no description language. (3) Who
would actually want to expose so much of their application's
implementation internals? (4) REST works with HTTP only, it's not
transport protocol independent. (5) There is no practical, clear and
consistent guidance on how to design RESTful applications. (6) REST does
not support transactions. (7) REST is unreliable. (8) No pub/sub support:
REST is fundamentally based on a client-server model, and HTTP always
refers to a client and a server as the endpoints of communication. A
client interacts with a server by sending requests and receiving
responses. In a pub/sub model, an interested party subscribes to a
particular category of information and gets notified each time something
new appears. How could pub/sub be supported in a RESTful HTTP
environment? We don't have to look far to see a perfect example of
this: it's called syndication, and RSS and Atom Syndication are examples
of it. A client queries for new information by issuing an HTTP GET
against a resource that represents the collection of changes, e.g. for
a particular category or time interval. One might expect this to be
extremely inefficient, but it isn't, because GET is the most optimized
operation on the Web. In fact, you can easily imagine that a popular
weblog server would have to scale up much more if it had to actively
notify each subscribed client individually about each change.
Notification by polling scales extremely well (see the polling sketch
at the end of this item)... (9) No asynchronous interactions. (10)
Lack of tools: vendors
are coming up with more and more (supposedly) easier and better support
for RESTful HTTP development in their frameworks, e.g. Sun with JAX-RS
(JSR 311) or Microsoft with the REST support in .NET 3.5 or the ADO.NET
Data Services Framework... Is REST, and its most common implementation,
HTTP, perfect? Of course not. Nothing is perfect, definitely not for
every scenario, and most of the time not even for a single scenario.
I've completely ignored a number of very reasonable problem areas that
require more complicated answers, for example message-based security,
partial updates and batch processing, and I solemnly promise to address
these in a future installment.
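
To make the polling argument in doubt (8) concrete, here is a minimal
sketch of a feed poller using a conditional GET, so that the server can
answer '304 Not Modified' cheaply when nothing has changed; the feed
URL is a placeholder and error handling is omitted:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class FeedPoller {
        public static void main(String[] args) throws Exception {
            URL feed = new URL("http://example.org/changes.atom"); // hypothetical
            String etag = null;                  // validator from the last poll
            while (true) {
                HttpURLConnection conn =
                        (HttpURLConnection) feed.openConnection();
                if (etag != null) {
                    conn.setRequestProperty("If-None-Match", etag);
                }
                if (conn.getResponseCode()
                        == HttpURLConnection.HTTP_NOT_MODIFIED) {
                    System.out.println("Nothing new; the cheap common case.");
                } else {
                    etag = conn.getHeaderField("ETag"); // remember validator
                    System.out.println("New entries; read the feed body here.");
                }
                conn.disconnect();
                Thread.sleep(60000);             // poll interval
            }
        }
    }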

XML Base (Second Edition) Issued as a Proposed Edited Recommendation

Members of the W3C XML Core Working Group have published a Proposed
Edited Recommendation for "XML Base (Second Edition)." The document
describes a facility, similar to that of HTML BASE, for defining base
URIs for parts of XML documents. As a Proposed Edited Recommendation
(PER), this second edition is not a new version of XML Base: its
purpose is to clarify a number of issues that have become apparent
since the first edition was published. Some of these were first published
as separate errata; others were published in public editor's drafts in
November 2006 and December 2006. HTML's BASE element allows authors to explicitly
specify a document's base URI for the purpose of resolving relative
URIs in links to external images, applets, form-processing programs,
style sheets, and so on. The document describes a mechanism for
providing base URI services to XLink, but as a modular specification
so that other XML applications benefiting from additional control over
relative URIs but not built upon XLink can also make use of it. The
syntax consists of a single XML attribute named 'xml:base'. The
specification does not give the 'xml:base' attribute any special status
as far as XML validity is concerned. In a valid document the attribute
must be declared in the DTD, and similar considerations apply to other
schema languages. The deployment of XML Base is through normative
reference by new specifications, for example XLink and the XML Infoset.
Applications and specifications built upon these new technologies will
natively support XML Base. The behavior of 'xml:base' attributes in
applications based on specifications that do not have direct or indirect
normative reference to XML Base is undefined. It is expected that a
future RFC for XML Media Types will specify XML Base as the mechanism
for establishing base URIs in the media types it defines. A companion
document "Testing XML Base Conformance" from the W3C XML Core Working
Group is also available. While "XML Base" does not specify an interface
for determining the base URI of a node in an XML document, various other
specifications directly or indirectly refer normatively to XML Base,
and provide mechanisms by which the results of XML Base processing can
be determined. Some of these specifications have test suites that
include XML Base tests.
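
To illustrate the resolution that the specification governs, consider
a fragment carrying an 'xml:base' attribute and the corresponding
resolution step in Java; java.net.URI implements the generic
relative-reference resolution algorithm, and the document content here
is invented:

    import java.net.URI;

    public class XmlBaseSketch {
        public static void main(String[] args) {
            // A fragment of the kind xml:base governs:
            //   <doc xml:base="http://example.org/today/">
            //     <link href="new.xml"/>
            //   </doc>
            // The href is resolved against the in-scope base URI:
            URI base = URI.create("http://example.org/today/");
            URI resolved = base.resolve("new.xml");
            System.out.println(resolved); // http://example.org/today/new.xml
        }
    }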

Liberty Alliance Web Services Framework: A Technical Overview

This overview document enumerates the major features of Liberty Web
Services, a framework for identity-based services that provides added
value for identity, security, and privacy above and beyond basic web
services, and thereby makes identity data portable across domains. The
term Liberty Web Services comprises the Identity Web Services Framework
(ID-WSF) and the Identity Service Interface Specifications (ID-SIS) that
take advantage of that framework. Together, these two pieces enable
identity-based services -- web services associated with the identity
attributes of individual users. Why are identity-based services valuable?
Fundamentally, because they enable a user's identity data to be portable
across the many Web applications that, if able to access these attributes,
can provide a more customized and meaningful experience to the user,
while relieving that user of the burden of repeatedly providing and
managing their identity attributes at each application. ID-WSF builds
on many existing standards for networking and distributed computing, and
adds specialized capabilities for handling identity-related information
and tasks and for ensuring privacy and security. With ID-WSF providing
the addressing, security, and privacy plumbing, different ID-SIS
specifications define the specific syntax and semantics for sharing
different slices of your identity attributes. For instance, a Calendar
SIS specifies how a travel service would query the user's Calendar
Service for free blocks, or write an event. Other ID-SIS specifications
either already exist or can be defined for other aspects of your
identity, e.g., the user's personal profile, geolocation, presence, or
wallet... An identity-based service is a web service associated with a
particular user, for example, a web service at which a user's calendar
information can be accessed. Identity-based services require
functionality beyond that necessary for basic web services not associated
with a given user -- particularly in the areas of identity, security, and
privacy. Liberty ID-WSF specifications define the addressing, security,
and privacy plumbing, and the various Liberty ID-SIS specifications define
the specific syntax and semantics for sharing different slices of identity
attributes. Together, ID-WSF and ID-SIS make identity data portable in a
secure and privacy-respecting manner.

Tuesday, March 18, 2008

Web Creator Rejects Net Tracking

The creator of the Web has said consumers need to be protected against
systems which can track their activity on the internet. Sir Tim
Berners-Lee told BBC News he would change his internet provider if it
introduced such a system. Plans by leading internet providers to use
Phorm, a company which tracks web activity to create personalised adverts,
have sparked controversy. Sir Tim said he did not want his ISP to track
which websites he visited. "I want to know if I look up a whole lot of
books about some form of cancer that that's not going to get to my
insurance company and I'm going to find my insurance premium is going
to go up by 5% because they've figured I'm looking at those books," he
said. Sir Tim said his data and web history belonged to him... Phorm
has said its system offers security benefits which will warn users about
potential phishing sites -- websites which attempt to con users into
handing over personal data. The advertising system created by Phorm
highlights a growing trend for online advertising tools -- using personal
data and web habits to target advertising. Social network Facebook was
widely criticised when it attempted to introduce an ad system, called
Beacon, which leveraged people's habits on and off the site in order to
provide personal ads... According to "The Register" ("Gov advisors:
Phorm is illegal"), "The Foundation for Information Policy Research
(FIPR), a leading government advisory group on internet issues, has
written to the Information Commissioner arguing that Phorm's ad targeting
system is illegal. In an open letter posted to the think tank's website
today, the group echoes concerns voiced by London School of Economics
professor Peter Sommer that Phorm's planned partnerships with BT, Virgin
Media and Carphone Warehouse are illegal under the Regulation of
Investigatory Powers Act 2000 (RIPA). The letter, signed by FIPR's top
lawyer Nicholas Bohm, states: 'The explicit consent of a properly-informed
user is necessary but not sufficient to make interception lawful'...
Bohm uses the letter to urge the Information Commissioner, Richard Thomas,
to ignore the conclusions of the Home Office, which advised BT and the
other ISPs that Phorm's technology is legal."

HTTP Header Linking

This memo clarifies the status of the Link HTTP header and attempts to
consolidate link relations in a single registry. A means of indicating
the relationships between documents on the Web has been available for
some time in HTML, and was defined as an HTTP header in RFC 2068, but
removed from RFC 2616, due to a lack of implementation experience. There
have since surfaced many cases where a means of including this information
in HTTP headers has proved useful. However, because it was removed, the
status of the Link header is unclear, leading some to consider minting
new application-specific HTTP headers instead of reusing it. This
document seeks to address these shortcomings. Additionally, formats
other than HTML -- namely, Atom (RFC 4287) -- have also defined generic
linking mechanisms that are similar to those in HTML, but not identical.
This document aims to reconcile these differences when such links are
expressed as headers. This straw-man draft is intended to give a rough
idea of what it would take to align and consolidate the HTML and Atom
link relations into a single registry with reasonable extensibility
rules. In particular: (a) it changes the registry for Atom link relations,
and the process for registration; (b) it assigns more generic semantics
to several existing link relations, both Atom and HTML; (c) it changes
the syntax of the Link header -- in the case where extensions are present.
The Link entity-header field provides a means for describing a relationship
between two resources, generally between that of the entity associated
with the header and some other resource. An entity may include multiple
Link values. The Link header field is semantically equivalent to the
'link' element in HTML, as well as the 'atom:link' element in Atom. The
title parameter may be used to label the destination of a link such
that it can be used as identification within a human-readable menu...
Link Relation Registry: This specification is intended to update Atom.
A link relation is a way of indicating the semantics of a link. Link
relations are not
format-specific, and must not specify a particular format or media type
that they are to be used with. The security considerations of following
a particular link are not determined by the link's relation type; they
are determined by the specific context of the use and the media type of
the response. Likewise, a link relation should not specify what the
context of its use is, although the media type of the dereferenced link
may constrain how it is applied. New relations may be registered,
subject to IESG Approval, as outlined in RFC 2434.
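
As a concrete illustration, a Link header carries the same information
as an HTML 'link' element, and a client can read it with standard Java;
the URL and the sample header value below are illustrative only:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class LinkHeaderSketch {
        public static void main(String[] args) throws Exception {
            // The response might carry, for example:
            //   Link: <http://example.org/chapter2>; rel="next"; title="next chapter"
            URL url = new URL("http://example.org/chapter1");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.getResponseCode();                    // issue the request
            String link = conn.getHeaderField("Link"); // case-insensitive
            System.out.println(link != null ? link : "no Link header");
            conn.disconnect();
        }
    }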

OpenAjax Adds Security, Mobility To Web 2.0 Apps

The OpenAjax Alliance updated its publish/subscribe platform, and
unveiled Mobile Ajax for mobile devices at the AjaxWorld conference in
New York. Ajax (Asynchronous JavaScript and XML) powers most Web 2.0
applications, including mashups, as well as gadgets, which can be
placed on Web pages or social networking sites to show weather, incoming
mail or other highly customized content. But like any other code,
mashups and gadgets are vulnerable to malicious attacks. Ajax-based
content also hasn't penetrated the mobile world much, the alliance said.
The alliance's newly revamped framework, OpenAjax Hub 1.1, extends the
publish/subscribe features and allows incorporation of untrusted mashup
components, known as widgets, from third parties. Using IBM's Smash
technology, untrusted widgets are isolated into IFrames and can only
communicate with the rest of the mashup through a secure, mediated message
bus. Later, the alliance expects to issue a standard API for OpenAjax Hub
1.1, along with a commercial-ready open source JavaScript reference
implementation, the group said in a statement. Mobile Ajax, its other
initiative, is intended to broaden use of Ajax on mobile phones. Many
Ajax-powered mobile applications require integration with the phone's
operating system for physical location or for one-touch dialing, for
example. To address the OS integration requirement, the Mobile Ajax
committee will establish use cases and requirements and characterize
the security requirements, with likely follow-on efforts to pursue
industry standards and/or open source work. According to the
announcement, "The Ajax industry today has dozens of useful Ajax
libraries and several popular developer tools, but integration of Ajax
libraries into Ajax tools has been a largely library-by-library manual
process for the tool vendors. In addition to its mashup features,
OpenAjax Metadata also defines a comprehensive industry XML standard
for describing Ajax library APIs and UI controls, with the objective
to allow arbitrary Ajax tools to integrate with arbitrary Ajax libraries.
Among the participants on the IDE committee are representatives from
Adobe, Aptana, Dojo, Eclipse, IBM, Microsoft, Sun, TIBCO and Zend."
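
The OpenAjax Hub itself is a JavaScript API, but the shape of a
mediated message bus is easy to show in outline. The following Java
sketch (invented names, not the OpenAjax or SMash API) routes every
published message through a policy check before delivery, which is the
essential point of the mediated bus described above:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class MediatedHubSketch {
        interface Subscriber { void onMessage(String topic, String payload); }

        // The hub mediates all traffic; widgets never talk to each
        // other directly.
        static class Hub {
            private final Map<String, List<Subscriber>> subs =
                    new HashMap<String, List<Subscriber>>();

            void subscribe(String topic, Subscriber s) {
                if (!subs.containsKey(topic)) {
                    subs.put(topic, new ArrayList<Subscriber>());
                }
                subs.get(topic).add(s);
            }

            void publish(String sender, String topic, String payload) {
                if (!allowed(sender, topic)) return; // drop disallowed traffic
                List<Subscriber> list = subs.get(topic);
                if (list == null) return;
                for (Subscriber s : list) s.onMessage(topic, payload);
            }

            // Toy policy: untrusted widgets may publish on "public.*" only.
            private boolean allowed(String sender, String topic) {
                return sender.startsWith("trusted.")
                        || topic.startsWith("public.");
            }
        }

        public static void main(String[] args) {
            Hub hub = new Hub();
            hub.subscribe("public.weather", new Subscriber() {
                public void onMessage(String topic, String payload) {
                    System.out.println(topic + ": " + payload);
                }
            });
            hub.publish("untrusted.w1", "public.weather", "72F"); // delivered
            hub.publish("untrusted.w1", "private.mail", "data");  // blocked
        }
    }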

Sun Unveils NetBeans 6.1 Beta

Not to be outdone by the confab for Eclipse developers at EclipseCon,
Sun Microsystems is announcing the beta of its NetBeans 6.1 integrated
development environment. Sun officials are announcing the availability
of NetBeans 6.1, which delivers a set of features for JavaScript
development, a key component for delivering AJAX Web applications and
tighter integration of MySQL database functionality. NetBeans is Sun's
open-source Java development tool set. Sun officials said the new
JavaScript support includes semantic highlighting, code completion,
type analysis, quick fixes, semantic checks and refactoring. In addition,
the NetBeans 6.1 beta has a browser compatibility feature that enables
developers to write JavaScript code that works in the Mozilla Firefox,
Internet Explorer, Opera and Safari Web browsers. Jim Parkinson, vice
president of tools and services at Sun, said that, since the release of
NetBeans 4.0, momentum and adoption have been strong, with more than
3.2 million downloads coming over the past two years: "With NetBeans 6.1,
we expect adoption rates to continue to go through the roof, especially
with the new JavaScript functionality and the tighter integration work
taking place with the MySQL database." According to the release notes,
NetBeans IDE 6.1 is a significant update to NetBeans IDE 6.0 and includes
the following changes: (1) A new window system that includes transparency;
(2) Sharability of projects: this new feature in default Java, Web and
all J2EE project types allows you to create projects that share definitions
of libraries; that in turn allows you to create self-contained projects or
sets of projects that can be built from the command line, on continuous
integration servers, and by users of other IDEs without problems; (3)
Javadoc and sources association: now any JAR item on the project classpath
can be associated with its Javadoc and sources too; (4) JSF CRUD Generator:
with this feature, you can generate a JavaServer Faces CRUD application
from JPA entity classes (a sample entity appears after this list); (5)
New MySQL support in Database Explorer:
this feature allows you to register a MySQL Server; view, create, and
delete databases; easily create and open connections to these databases;
and launch the administration tool for MySQL. This
also allows you to easily create NetBeans sample databases so that
following tutorials, blogs, and so on is significantly easier. (6)
Inspect Members and Hierarchy Windows: Inspect Members and Hierarchy
actions now work when the caret in the Java Editor is on a Java class
for which there is no source available. (7) Support for Java Beans: you
can now view Java Bean patterns in the Navigator and BeanInfo Editor.
(8) Javadoc Code Completion: editing of javadoc comments is more convenient
with code completion. (9) JavaScript support; (10) Spring Framework
Support; (11) On Demand Binding Attribute for Visual Web JSF projects;
(12) Axis2 support for web services; (13) SOAP UI integration for Web
Service testing and monitoring.
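
For item (4), the CRUD generator works from ordinary JPA entity
classes. A minimal sketch of such an entity (names invented) shows
what the wizard consumes:

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Entity
    public class Customer {             // maps to a CUSTOMER table by default
        @Id
        @GeneratedValue                 // provider assigns primary keys
        private Long id;

        private String name;            // plain fields become columns

        public Long getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }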

Eclipse at eBay, Part 1: Tailoring Eclipse to the eBay Architecture

In this article the author explains how eBay uses Eclipse and custom
plug-ins to build the next generation of the giant auction Web site.
Eclipse's first claim to fame was as an integrated development environment
(IDE) for Java technology. Eclipse's plug-in architecture is a big reason
for its success. There are many popular plug-ins available, and it is
very easy to create your own. These two traits make Eclipse a perfect
fit for systems with specialized architectures, such as eBay. Eclipse
is known for being a Java IDE with a great plug-in system. The V3.3
(Europa) release of Eclipse brought with it several specialized
distributions of Eclipse. These included Eclipse for Java developers
and Eclipse for Java EE developers. In addition, you can use the Eclipse
C/C++ Development Toolkit (CDT) and the Eclipse PHP Development Toolkit
(PDT). eBay was originally launched as AuctionWeb in 1995. The original
site was written in Perl. As the site grew, it was rewritten with a C++
back end and a front end that made use of XSL. Using XSL to generate
HTML was very cutting-edge back in the late 1990s. eBay went public in
1998 and continues to see exponential growth. Constantly mounting
pressure from traffic forced a massive rewrite of the back end of eBay
in the Java programming language starting in 2001. The front-end
architecture was not changed. The Java+XSL architecture is internally
referred to as the V3 architecture, with Perl being V1 and C++/XSL as
V2. The V3 architecture proved to be massively scalable, allowing eBay
to grow to its current size as one of the world's most visited sites.

Eclipse to Stress Component, Runtime Efforts

The Eclipse Foundation announced that it will branch out into the realm of
component-oriented software development, unveiling an umbrella project
unifying several runtime initiatives. Eclipse's component development
plan, called CODA (Component Oriented Development and Assembly), hinges
on Eclipse's Equinox, which is the foundation's OSGi-based runtime and
a part of the new Eclipse Runtime (RT) project. CODA is a methodology on
how to build and deploy applications. Equinox is runtime platform software
focused on Java and supporting the concepts of CODA. Eclipse RT serves
as a top-level project to house runtime efforts in the Eclipse community.
Featured will be six sub-projects, including: Equinox; Eclipse Communication
Framework for development of distributed tools and applications;
EclipseLink, providing object-relational persistence services; and Rich
AJAX Platform for building AJAX applications. The other two subprojects
include Swordfish, offering an SOA framework, and Riena for building an
enterprise desktop with capabilities like the ability to access transaction
and database systems. Also on tap at the conference is the introduction of
an Equinox community portal to educate developers on Equinox, OSGi, and
Eclipse runtime projects. OSGi has served as the basis for the Eclipse
plug-in model, in which the Eclipse IDE is extended via plug-ins offering
different capabilities. Equinox and CODA provide advantages in
component-oriented development because Equinox is based on OSGi, a
component model spanning platforms and architectural tiers. OSGi also
can be used in mobile and embedded devices and desktop and server
applications; other component models tend to be operating system-specific
or tied to a specific deployment tier. Developers using Equinox can
assemble and customize the application and runtime platform; also, a
standard integration mechanism is provided to link to partner and
customer solutions.
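
Since Equinox is an OSGi runtime, CODA's component unit is the OSGi
bundle, and the smallest interesting piece of bundle code is its
activator. The sketch below uses only the standard OSGi API (the class
name is invented); the bundle's MANIFEST.MF would name this class in
its Bundle-Activator header, and Equinox drives the lifecycle calls:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class GreeterActivator implements BundleActivator {
        // Called by the framework (e.g. Equinox) when the bundle starts.
        public void start(BundleContext context) {
            System.out.println("Greeter bundle started on "
                    + context.getProperty("org.osgi.framework.vendor"));
        }

        // Called when the bundle is stopped; release resources here.
        public void stop(BundleContext context) {
            System.out.println("Greeter bundle stopped.");
        }
    }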

Speech Synthesis Markup Language (SSML) Version 1.1

W3C announced that the Voice Browser Working Group has published an
updated Working Draft for the "Speech Synthesis Markup Language (SSML)
Version 1.1" specification, part of the W3C framework for enabling access
to the Web using spoken interaction. Appendix G documents the
specification changes since SSML Version 1.0; a colored diff-marked
version is also available for comparison purposes. Please send comments
by 17-April-2008. This document enhances SSML 1.0 to provide better
support for a broader set of natural (human) languages. To determine in
what ways, if any, SSML is limited by its design with respect to
supporting languages that are in large commercial or emerging markets
for speech synthesis technologies but for which there was limited or no
participation by either native speakers or experts during the development
of SSML 1.0, the W3C held three workshops on the Internationalization of
SSML. The first workshop, in Beijing, PRC, in October 2005, focused
primarily on Chinese, Korean, and Japanese languages, and the second
workshop, in Crete, Greece, in May 2006, focused primarily on Arabic,
Indian, and Eastern European languages. The third workshop, in Hyderabad,
India, in January 2007, focused heavily on Indian and Middle Eastern
languages. Information collected during these workshops was used to
develop a requirements document. Changes from SSML 1.0 are motivated by
these requirements. SSML provides a rich, XML-based markup language for
assisting the generation of synthetic speech in Web and other applications.
It provides a standard way to control aspects of speech such as
pronunciation, volume, pitch, rate, etc. across different
synthesis-capable platforms. SSML is part of a larger set of markup
specifications for voice browsers developed through the open processes
of the W3C. A related initiative to establish a standard system for
marking up text input is SABLE, which tried to integrate many different
XML-based markups for speech synthesis into a new one. The intended use
of SSML is to improve the quality of synthesized content. Different
markup elements impact different stages of the synthesis process. The
markup may be produced either automatically, for instance via XSLT or
CSS3 from an XHTML document, or by human authoring. Markup may be present
within a complete SSML document or as part of a fragment embedded in
another language, although no interactions with other languages are
specified as part of SSML itself. Most of the markup included in SSML
is suitable for use by the majority of content developers.
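
To give a flavor of the markup, here is a small SSML document that
adjusts rate and pitch, shown here as a Java string literal; the
element and attribute names come from the SSML specification, while
the sentence content is invented:

    public class SsmlSketch {
        public static void main(String[] args) {
            String ssml =
                "<speak version=\"1.1\"\n"
              + "       xmlns=\"http://www.w3.org/2001/10/synthesis\"\n"
              + "       xml:lang=\"en-US\">\n"
              + "  Normal speech, then\n"
              + "  <prosody rate=\"slow\" pitch=\"low\">slow, low"
              + " speech</prosody>.\n"
              + "</speak>";
            System.out.println(ssml); // feed to a synthesis-capable platform
        }
    }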

Yahoo Search Takes Aim at Semantic Web

Yahoo Inc. announced that it will support various Semantic Web standards
in its new Search Open Platform, the latest move by the company to
embrace the emerging Web framework. The company also disclosed more
details about its plan to open its search engine to third party developers.
Yahoo said that its support of standards like microformats and RDF
(Resource Description Framework) is aimed at providing users with better
search results by improving the understanding of content and the
relationships between content. For example, the new Web standards would
allow pertinent data, such as a person's name, location, current job
specialties, number of contacts, and a link to get introduced to that
person, to be included with a LinkedIn profile found via Yahoo Search,
the company
noted. "With a richer understanding of LinkedIn's structured data
included in our index, we will be able to present users with more
compelling and useful search results for their site," noted Amit Kumar,
director of product management for Yahoo Search, in a blog post. Kumar:
"While there has been remarkable progress made toward understanding the
semantics of web content, the benefits of a data Web have not reached
the mainstream consumer. Without a killer semantic Web app for consumers,
site owners have been reluctant to support standards like RDF, or even
microformats. We believe that app can be Web search." Yahoo also
announced that within several weeks it will launch a beta tool to let
third parties add data to Yahoo Search results. Using this tool a restaurant,
for example, could add reviews or other data to Yahoo Search results for
queries about the eatery. Developers can build enhanced results
applications by accessing structured data that Yahoo will make available
through public APIs and in its index. The structured data is available
to Web site owners through feeds or the supported semantic Web standards.

The Australian METS Profile: A Journey about Metadata

In December 2007 the National Library of Australia registered an
Australian METS Profile with the Library of Congress. This profile
describes the rules and requirements for using the Metadata Encoding and
Transmission Standard (METS) to support the collection of and access to
content in Australian digital repositories. METS is a framework standard
that enables metadata describing an object and its structure to be
recorded in a document that can be used as a Submission Information
Package (SIP) or Dissemination Information Package (DIP) in digital
object management and delivery scenarios. It is extensible by plugging
in various other extension schemas such as MODS (Metadata Object
Description Schema) for resource description, MIX (Metadata for Images
in XML) for still image technical metadata and PREMIS (PREservation
Metadata Implementation Strategies) for provenance and fixity. The aim
of this article is to describe our journey towards a generic Australian
METS profile that can be used across multiple domains and usage scenarios.
It also describes how the main profile and the sub-profile work together
and what additional profiling work is planned by the National Library of
Australia and its partners to address the needs of the Australian
repository community and (hopefully) of the international community as
well. The Journal Workflow project focussed on the use case of preserving
access to an on-line journal created via the Public Knowledge Project
(PKP) Open Journal System (OJS) application. The Submission Service takes
packaged content (OJS Native XML), performs pre-ingestion processing over
it (transform OJS Native XML into a METS package) and submits it to a
repository. This workflow is customisable via the ability to develop and
configure localised workflow steps within the service. The METS package
is unpacked by the receiving repository and stored in whatever form the
repository requires. In future, the OJS application itself is likely to
support the export of content as a METS package. The Dissemination Service
is available to repositories and makes use of the Digital Repository
Interface (DRI) XML as the standard for representing the repository
objects. In this way any repository able to generate DRI-compliant
markup can store their objects natively but through the Dissemination
Service have them rendered in a common way. Under the Journal Workflow, a
journal stored in DSpace and Fedora native formats (vastly different) can
be given the same look and feel. The Simple Web-service Offering Repository
Deposit (SWORD) project led by UKOLN published its Deposit API not long
after the Submission Service project had concluded. SWORD has been
developed as a profile of the Atom Publishing Protocol and is agnostic
to workflow or content packaging format. Combining METS with SWORD in
submission workflows is a direction we are currently exploring. METS is
a good fit for reaching our destination. It is, however, one of a long
line of standards developed to meet emerging needs. Standards will
continue to be developed to meet changes in technology and the dynamic
nature of the digital universe.
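
A skeletal METS document makes the framework's shape concrete:
descriptive metadata, administrative metadata, file references, and
structure each have their own section, with extension schemas plugged
into the wrappers. The element names below are from the METS schema;
the content is invented:

    public class MetsSkeleton {
        public static void main(String[] args) {
            String mets =
                "<mets xmlns=\"http://www.loc.gov/METS/\">\n"
              + "  <dmdSec ID=\"dmd1\">\n"
              + "    <mdWrap MDTYPE=\"MODS\"><!-- description --></mdWrap>\n"
              + "  </dmdSec>\n"
              + "  <amdSec><!-- e.g. PREMIS provenance, MIX data --></amdSec>\n"
              + "  <fileSec>\n"
              + "    <fileGrp><!-- pointers to content files --></fileGrp>\n"
              + "  </fileSec>\n"
              + "  <structMap><!-- how files form the object --></structMap>\n"
              + "</mets>";
            System.out.println(mets);
        }
    }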

World Wide Web Consortium Lists: 400,000 Emails

HTML 4.0, XML, PNG, CSS, DOM, and XQuery: These are but a few of the
technologies to come out of the World Wide Web Consortium, commonly
referred to as the W3C. Mark Logic Corporation is proud to announce that
MarkMail has loaded the full W3C public mailing lists. MarkMail in fact
uses all of those W3C technologies. The W3C mailing list archives start
in 1994 and cover 400,000 emails across 200 mailing lists. MarkMail is a
free service for searching mailing list archives, with huge advantages
over traditional search engines. It is powered by MarkLogic Server: Each
email is stored internally as an XML document, and accessed using XQuery.
All searches, faceted navigation, analytic calculations, and HTML page
renderings are performed on a single MarkLogic Server machine running
against millions of messages. The MarkLogic Server is a commercial
enterprise-class XML Content Server built to load, query, manipulate,
and render large amounts of XML using the W3C's XQuery language. In
MarkMail every email is represented and held as an XML document. MarkMail
lets you search millions of emails across thousands of mailing lists.
One may search using keywords as well as "from:", "subject:", "extension:",
and "list:" constraints. The GUI doesn't yet expose it, but you can
negate any search item, like "-subject:jira". Subdomains constrain the
lists searched, so "tomcat.markmail.org" searches only the Tomcat lists.
The "n"
and "p" keyboard shortcuts may be used to navigate the search results.

Friday, March 14, 2008

Microsoft Releasing OOXML SDK

The Office Open XML (OOXML) format may not have gotten ISO's final
blessing as an open standard yet, but Microsoft is finalizing plans
to release a software development kit for it anyway. Microsoft plans
to put out the final beta of the OOXML SDK next month, and release
Version 1.0 in May, according to Doug Mahugh, a technical evangelist
at Microsoft. The final SDK beta and related information will be
available at openxmldeveloper.org, openxmlcommunity.org, and
microsoft.com. The SDK will enable developers to write applications
that can open, read, and otherwise work with OOXML documents, or port
existing applications that work with documents in older Microsoft
formats over to OOXML, Mahugh said. Moreover, the SDK will "put
Microsoft on the hook to keep your app in line with the OOXML standard"
as it changes, he said. For instance, if national members of ISO
decide at the end of this month to approve the OOXML specification --
which has been changed substantially since its failure to pass in
September 2007 -- those changes will be reflected in Version 1.0 of
the SDK, Mahugh said. And Microsoft would continue to update the SDK
to make sure that applications built with it remained compliant with
an Open XML standard as changes were made in the future, he said.
Microsoft first released a Community Technology Preview of the SDK
last June. It is targeted at developers of business intelligence,
content management and other applications in the Office and SharePoint
ecosystem. Microsoft also offers an API for packaging OOXML for
developers who need "more low-level control" over their code, Doug
Mahugh said.
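
The SDK itself targets the .NET platform, but the package structure it
manipulates can be inspected from any language, since an OOXML document
is a ZIP archive of XML parts. A minimal Java sketch (the file name is
invented) that lists the parts of a .docx:

    import java.util.Enumeration;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    public class OoxmlParts {
        public static void main(String[] args) throws Exception {
            // Any OOXML document is a ZIP of XML parts plus relationships.
            ZipFile pkg = new ZipFile("report.docx"); // hypothetical file
            Enumeration<? extends ZipEntry> entries = pkg.entries();
            while (entries.hasMoreElements()) {
                ZipEntry part = entries.nextElement();
                System.out.println(part.getName()); // e.g. word/document.xml
            }
            pkg.close();
        }
    }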

SMash: Secure Component Model for Cross-Domain Mashups on Unmodified Browsers

This 13-page paper addresses the problem of securing mashup applications
which mix active content from different trust domains. It is an extended
version of the paper prepared for presentation at the Seventeenth
International World Wide Web Conference (WWW2008), to be held on April
21-25, 2008 in Beijing, China. "Mashup applications mix and merge content
(data and code) from multiple content providers in a user's browser, to
provide high-value web applications that can rival the user experience
provided by desktop applications. Current browser security models were
not designed to support such applications and they are therefore
implemented with insecure workarounds. In our project SMash, we present
a secure component model, where components are provided by different
trust domains, and can interact using a communication abstraction that
allows ease of specification of a security policy. We propose a secure
component model comprising a central event communication hub and governed
communication channels which mediate the communication between isolated
components. We illustrate how such a model can be used to enforce basic
access control policies which define the allowed interactions between
components. We here describe SMash, an implementation of this model on
current browsers, which can be used right away in building secure mashup
applications. Our implementation depends on iframes for isolation while
bootstrapping a publish-subscribe model of communication using URL
fragment identifiers. Our programming model is intentionally general
enough that other communication techniques could be used instead of URL
fragments. SMash is resilient to attacks such as channel spying, message
forging, and frame-phishing. We have evaluated our implementation and
find that it scales well with increasing number of components in the
mashup, and has enough data throughput to be useful in a number of
mashup application scenarios. Our implementation is available as an
open-source JavaScript library."

IBM Moves on Secure Mashups: SMash Contributed to OpenAjax Alliance

IBM is unveiling technology to secure mashups and is donating it to
the OpenAjax Alliance, an organization promoting AJAX (Asynchronous
JavaScript and XML) interoperability. Mashups are defined by IBM as
Web applications that pull information from multiple sources such as
Web sites, enterprise databases, and e-mail to present a single view.
But mashups have been beset by security risks. Through IBM's SMash
(secure mashup) technology, content from different sources can
communicate with each other, but the sources are kept separate to
prevent the spread of malicious code. SMash keeps code and data from
each of the sources separated while allowing controlled sharing of
data through a secure communication channel. The technology is being
donated to the OpenAjax Alliance and is to become part of OpenAjax
Hub 1.1, which goes to general release in June, according to David
Boloker, CTO of emerging Internet technologies in the IBM software
group. Once available, SMash can be used in Web pages in mashups.
Jeffrey Hammond, senior analyst for application development at
Forrester Research: "This client-side cross-domain access pattern is
becoming increasingly popular when developers want to mix in
technology from multiple sites, but don't feel comfortable importing
that code into their server domains. Building on top of OpenAjax Hub
is a strength of SMash." The 'smash provider' is described in the
"OpenAjax Hub 1.1 Specification Managed Hub Overview" based upon an
IBM research paper (to be published in the WWW2008 Proceedings):
"The smash provider allows for secure inclusion of untrusted widgets
within a mashup. (1) Widgets are placed into IFRAMEs that have a
different subdomain than the mashup container application and the
other widgets. This technique leverages the same-domain policy that
is implemented in today's popular browsers whereby the browser disallows
JavaScript or DOM bridging between different-domain IFRAMEs. (2)
Inter-widget communication happens through a particular mechanism
(the window.location fragment identifier, aka "IFrame Proxy" technique)
that can be shared among the IFRAMEs. Note that the SMash technique
sets up the IFRAMEs such that all communication via IFrame proxies is
mediated by the mashup container application, which prevents widgets
from listening in on the SMash communication channel..."