tag:blogger.com,1999:blog-60631710997640789882024-03-07T21:41:18.547-08:00XMLSajjadhttp://www.blogger.com/profile/02956747095576348209noreply@blogger.comBlogger656125tag:blogger.com,1999:blog-6063171099764078988.post-6496253660790396962010-12-25T08:26:00.000-08:002010-12-25T08:26:12.475-08:00XHTML Developer, ASP.NET Developer, Web Developer<a href="http://www.sajjadh.com/">XHTML Developer, ASP.NET Developer, Web Developer</a>Sajjadhttp://www.blogger.com/profile/02956747095576348209noreply@blogger.com0tag:blogger.com,1999:blog-6063171099764078988.post-40203561205449166252010-08-13T23:52:00.000-07:002010-08-13T23:54:29.085-07:00Cloud Computing, SOA and Windows Azure"The Windows Azure platform is an Internet-scale cloud computing services platform hosted in Microsoft data centers. Windows tools provide functionality to build solutions that include a cloud services operating system and a set of developer services. The key parts of the Windows Azure platform are: Windows Azure -- the application container, Microsoft SQL Azure, and Windows Azure platform AppFabric.<br /><br />The Windows Azure platform is part of the Microsoft cloud, which consists of multiple categories of services: (1) Cloud-based applications: These are services that are always available and highly scalable. They run in the Microsoft cloud, and consumers can utilize them directly. Examples include Bing, Windows Live Hotmail, and Office. (2) Software services: These services are hosted instances of Microsoft's enterprise server products that consumers can use directly. Examples include Exchange Online, SharePoint Online, Office Communications Online, etc. (3) Platform services: This is where the Windows Azure platform itself is positioned. It serves as an application platform public cloud that developers can use to deploy next-generation, Internet-scale, and always-available solutions. 
(4) Infrastructure services: There is a limited set of elements of the Windows Azure platform that can support cloud-based infrastructure resources.<br /><br />SQL Azure is a cloud-based relational database service built on SQL Server technologies that exposes a fault-tolerant, scalable, and multi-tenant database service. SQL Azure does not exist as hosted instances of SQL Server; instead, it uses a cloud fabric layer to abstract and encapsulate the underlying technologies required for provisioning, server administration, patching, health monitoring, and lifecycle management.<br /><br />Summary of Key Points: (1) The Windows Azure platform is primarily a PaaS deployed in a public cloud managed by Microsoft. (2) The Windows Azure platform provides a distinct set of capabilities suitable for building scalable and reliable cloud-based services. (3) The overall Windows Azure platform further encompasses SQL Azure and Windows Azure platform AppFabric." <a href="http://queue.acm.org/detail.cfm?id=1841832">More Info</a> <a href="http://xml.coverpages.org/healthcare.html">See also XML in Clinical Research and Healthcare Industries</a>Sajjadhttp://www.blogger.com/profile/02956747095576348209noreply@blogger.com8tag:blogger.com,1999:blog-6063171099764078988.post-73872674906449648342010-08-13T23:51:00.000-07:002010-08-13T23:52:53.361-07:00Computers in Patient Care: The Promise and the Challenge"Why is it that in terms of automating medical information, we are still attempting to implement concepts that are decades old? With all of the computerization of so many aspects of our daily lives, medical informatics has had limited impact on day-to-day patient care. 
We have witnessed slow progress in using technology to gather, process, and disseminate patient information, to guide medical practitioners in their provision of care, and to couple them to appropriate medical information for their patients' care...<br /><br />The first challenge in applying medical informatics to the daily practice of care is to decide how computerization can help patient care and to determine the necessary steps to achieve that goal. Several other early attempts were made to apply computerization to health care. Most were mainframe-based, driving 'dumb' terminals. Many dealt only with the low-hanging fruit of patient order entry and results reporting, with little or no additional clinical data entry. Also, many systems did not attempt to interface with the information originator (e.g., physician) but rather delegated the system use to a hospital ward clerk or nurse, thereby negating the possibility of providing medical guidance to the physician, such as a warning about the dangers of using a specific drug.<br /><br />We have made significant technological advances that solve many of these early shortcomings. Availability of mass storage is no longer a significant issue. Starting with a 7-MB-per-freezer-size disk drive (which was not very reliable), we now have enterprise storage systems providing extremely large amounts of storage for less than $1 per gigabyte, and they don't take up an entire room. 
This advance in storage has been accompanied by a concomitant series of advances in file structures, database design, and database maintenance utilities, greatly simplifying and accelerating data access and maintenance. [But] if we truly want to develop an information utility for health-care delivery in an acute care setting (such as an intensive care unit or emergency department), we need to strive for overall system reliability at least on the order of our electric power grid...<br /><br />One significant issue is the balkanization of medical computerization. Historically, there has been little appreciation of the need for an overall system. Instead we have a proliferation of systems that do not integrate well with each other. For example, a patient who is cared for in my emergency department may have his/her data spread across nine different systems during a single visit, with varying degrees of integration and communication among these systems: EDIS (emergency department information system), prehospital care (ambulance) documentation system, the hospital ADT (admission/discharge/transfer) system, computerized clinical laboratory system, electronic data management (medical records) imaging system, hospital pharmacy system, vital-signs monitoring system, hospital radiology ordering system, and PACS system...." 
<a href="http://queue.acm.org/detail.cfm?id=1841832">More Info</a> <a href="http://xml.coverpages.org/healthcare.html">See also XML in Clinical Research and Healthcare Industries</a>Sajjadhttp://www.blogger.com/profile/02956747095576348209noreply@blogger.com0tag:blogger.com,1999:blog-6063171099764078988.post-31729275856940496592010-08-13T23:50:00.000-07:002010-08-13T23:51:24.780-07:00IETF Approves Symmetric Key Package Content Type SpecificationThe Internet Engineering Steering Group (IESG) has announced approval of the "Symmetric Key Package Content Type" specification as an IETF Proposed Standard. Hannes Tschofenig is the document shepherd for this document, and Tim Polk is the IETF Responsible Area Director. The specification was produced by members of the IETF Provisioning of Symmetric Keys (KEYPROV) Working Group.<br /><br />"This document provides the ASN.1 variant of the Portable Symmetric Key Container (PSKC), which is defined using XML in the I-D 'Portable Symmetric Key Container (PSKC)'. The symmetric key container defines a transport-independent mechanism for carrying one or more symmetric keys as well as any associated attributes. The container by itself is insecure; it can be secured using either the Dynamic Symmetric Key Provisioning Protocol (DSKPP) or the CMS protecting content types, per RFC 5652. In addition to the key container, this document also defines an ASN.1 version of the XML elements and attributes defined in PSKC.<br /><br />Working Group Summary: The WG agreed that this container would be the optional container, but there was a contingent (both in the WG and in the IEEE) that wanted the ASN.1 container. The format for the container has been stable since version -02. 
The ASN.1-converted XML elements and attributes were added in the last version to ensure alignment with PSKC.<br /><br />Document Quality: The text of this document is derived from the XML elements and attributes defined in draft-ietf-keyprov-pskc. As such, this document represents the ASN.1-based version of its XML-based counterpart. <a href="http://xml.coverpages.org/draft-ietf-keyprov-symmetrickeyformat-11.txt">More Info</a> <a href="http://xml.coverpages.org/keyManagement.html#ietf-keyprov">See also the IETF Provisioning of Symmetric Keys (KEYPROV) Working Group</a>Sajjadhttp://www.blogger.com/profile/02956747095576348209noreply@blogger.com0tag:blogger.com,1999:blog-6063171099764078988.post-60514462434591020052010-08-13T23:49:00.000-07:002010-08-13T23:50:11.782-07:00Building an AtomPub Server Using WCF Data ServicesOData (odata.org) builds on the HTTP-based goodness of Atom for publishing data; AtomPub for creating, updating and deleting data; and the Microsoft Entity Data Model (EDM) for defining the types of data.<br /><br />If you have a JavaScript client, you can get the data back directly in JSON instead of Atom format, and if you've got something else -- including Excel, the Microsoft .NET Framework, PHP, AJAX and more -- there are client libraries for forming OData requests and consuming OData responses.<br /><br />If you're using the .NET Framework on the server side, Microsoft also provides an easy-to-use library called WCF Data Services for exposing .NET Framework types or databases supported by the Microsoft Entity Framework as OData sources. This makes it easy to expose your data over the Internet in an HTTP- and standards-based way.<br /><br />[However] there are some things that you might like to do with OData that aren't quite part of the out-of-box experience, such as integrating OData with existing Atom- and AtomPub-based readers and writers..." 
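The JSON-versus-Atom choice described above is plain HTTP content negotiation, so any HTTP client library can act as a bare-bones OData client. The sketch below builds (but does not send) such a request in Python; the service root and entity-set name are hypothetical examples, not taken from the article.

```python
# Minimal sketch of an OData GET request. The wire format (Atom vs. JSON)
# is selected purely via the HTTP Accept header, as the article describes.
# SERVICE_ROOT and the "Products" entity set are hypothetical.
from urllib.request import Request

SERVICE_ROOT = "https://example.org/catalog.svc"  # hypothetical endpoint

def odata_request(entity_set, want_json=False, top=None):
    """Build (but do not send) an OData GET request."""
    url = f"{SERVICE_ROOT}/{entity_set}"
    if top is not None:
        url += f"?$top={top}"  # OData system query option
    # JavaScript clients typically ask for JSON; feed readers ask for Atom.
    accept = "application/json" if want_json else "application/atom+xml"
    return Request(url, headers={"Accept": accept})

req = odata_request("Products", want_json=True, top=5)
```

Actually sending the request (e.g. with `urllib.request.urlopen`) is left out, since the endpoint here is fictitious.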
<a href="http://msdn.microsoft.com/en-us/magazine/ff872392.aspx">More Info</a>Sajjadhttp://www.blogger.com/profile/02956747095576348209noreply@blogger.com0tag:blogger.com,1999:blog-6063171099764078988.post-74709865130541961912010-08-13T23:48:00.000-07:002010-08-13T23:49:23.158-07:00Computing Cloud Seen as Answer for Consolidated Audit Trail"FTEN, a supplier of risk management software to bulge-bracket firms on Wall Street, has proposed that the Securities and Exchange Commission rely on real-time data stored in a nationwide cloud of computing power and networks to create an effective audit trail of stock market activity.<br /><br />FTEN provides risk management, routing, surveillance, compliance and market data services to market participants. In a letter to the SEC, the firm proposed looking to already-deployed and commercially available systems that capture order and execution data in real time from stock exchanges, electronic communication networks, alternative trading systems and dark pools to start creating the trail.<br /><br />The data from all markets could then be mapped back to a unified format, creating a normalized set of data that regulators could review in real time for signs of market disruptions or abuse...<br /><br />Ted Myerson, FTEN's CEO, said FTEN's commercially deployed At-Trade secure data cloud already aggregates data from 50 sources with a wide variety of symbol directories, unifies it into a common format and feeds it back to private firms... FTEN says it provides real-time risk management and surveillance on as many as 17 billion shares of stock a day in the United States. That, it says, equates to risk calculations involving $150 billion worth of shares a day... 
FTEN did not put a price tag on what it would cost the securities industry to build out a consolidated audit trail system based on its At-Trade cloud of compute power and online data..." <a href="http://www.information-management.com/news/SEC_real_time_data_cloud_audit_trail-10018508-1.html">More Info</a>Sajjadhttp://www.blogger.com/profile/02956747095576348209noreply@blogger.com0tag:blogger.com,1999:blog-6063171099764078988.post-8886245681917589112010-08-13T23:46:00.000-07:002010-08-13T23:48:16.755-07:00The Arrival of HTML 5: Lots of New Features, All Eagerly Awaited"HTML (Hyper Text Markup Language) is one of the underpinning technologies of the modern web, with the lion's share of web users' Internet activities founded on it. HTML now stands on the brink of the next change -- the coming of HTML 5. At present, the Internet already contains a handful of HTML 5 specification outlines which partially cover HTML 5 features and concepts. In this article, we review the current state of HTML and describe the most significant HTML 5 innovations.<br /><br />Offline Potential: Some time ago, a new specification for client-side database support with interesting applications was introduced. While this feature had vast potential, it has been excluded from current specification drafts due to insufficient interest from vendors, which use various SQL back-ends. As such, the only offline feature currently available in HTML 5 is flexible online/offline resource management using cache manifests. Cache manifests allow the author of a document to specify which referenced resources must be cached in the browser's data store (e.g., static images, external CSS and JavaScript files) and which must be retrieved from a server (e.g., time-sensitive data like stock price graphs, responses from web services invoked from within JavaScript). 
The manifest also provides a means for specifying fallback offline replacements for resources which must not be cached. This mechanism gives the ability to compose HTML documents which can be viewed offline.<br /><br />REST in Forms: A REST application can be characterized by a clear separation between clients and servers, stateless communication with the server (no client context is stored on the server between requests) and a uniform client-server protocol that can be easily invoked from other clients. Applied to HTTP, it encourages the use of URIs for identifying all entities and of standard HTTP methods like GET (retrieve), POST (create), PUT (update) and DELETE (remove) for entity operations. HTML 5 now fully supports issuing PUT and DELETE requests from HTML forms without any workarounds. This is an unobtrusive but ideologically important innovation which brings more elegance into web architecture and simplifies development of HTML UIs for REST services.<br /><br />Communicating Documents: Documents opened in browsers can now exchange data using messages. Such data exchange may be useful on a web page that includes several frames with data loaded from different origins. Usually, a browser does not allow JavaScript code to access or manipulate the objects of other documents opened from a different origin. This is done to prevent cross-site scripting and other malicious and destructive endeavors..." 
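The uniform-interface idea described above maps each HTTP method onto one operation over a resource collection. The following Python sketch models those semantics with a plain in-memory store (the class, item shape, and method names are illustrative, not from the article); in conventional REST usage GET retrieves, POST creates, PUT replaces, and DELETE removes.

```python
# In-memory sketch of REST method semantics over an "items" collection.
# GET retrieves, POST creates (the server assigns the id), PUT replaces,
# DELETE removes. Illustrative only -- no HTTP server is involved.

class ItemStore:
    def __init__(self):
        self._items = {}
        self._next_id = 1

    def post(self, data):
        """Create: the server, not the client, assigns the new Id."""
        item_id = self._next_id
        self._next_id += 1
        self._items[item_id] = dict(data)
        return item_id

    def get(self, item_id):
        """Read a single item by Id."""
        return self._items[item_id]

    def put(self, item_id, data):
        """Update: replace an existing item in full."""
        if item_id not in self._items:
            raise KeyError(item_id)
        self._items[item_id] = dict(data)

    def delete(self, item_id):
        """Remove the item; further GETs for this Id will fail."""
        del self._items[item_id]

store = ItemStore()
item_id = store.post({"name": "widget", "price": 9.99})
store.put(item_id, {"name": "widget", "price": 7.49})
```

A real service would expose these four methods through request routing, but the create/read/replace/remove contract is the same.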
<a href="http://www.drdobbs.com/article/printableArticle.jhtml?articleId=226700204">More Info</a> <a href="http://dev.w3.org/html5/html4-differences/">See also HTML5 differences from HTML4</a>Sajjadhttp://www.blogger.com/profile/02956747095576348209noreply@blogger.com0tag:blogger.com,1999:blog-6063171099764078988.post-8231081733183578982010-08-13T23:45:00.000-07:002010-08-13T23:46:34.702-07:00Members of the W3C Device APIs and Policy Working Group have published a First Public Working Draft for "The Messaging API". The WG was chartered to create client-side APIs that enable the development of Web Applications and Web Widgets that interact with device services such as Calendar, Contacts, Camera... This document "represents the early consensus of the group on the scope and features of the proposed Messaging API; in particular, the group intends to work on message management (move, delete, copy, etc.) in a separate specification. Issues and editors' notes in the document highlight some of the points on which the group is still working and would particularly like to receive feedback.<br /><br />The Messaging API specification defines a high-level interface to Messaging functionality, including SMS, MMS and Email. It includes APIs to create, send and receive messages. The specification does not replace the RFCs for Mail or SMS URLs, but includes functionality complementary to these.<br /><br />Security: The API defined in this specification can be used to create and subscribe for incoming messages through different technologies. Sending messages usually has a cost associated with it, especially for SMSs and MMSs. Furthermore, this cost may depend on the message attributes (e.g. destination address) or external conditions (e.g. roaming status). Apart from billing implications, there are also privacy considerations due to the capability to access message contents. 
A conforming implementation of this specification must provide a mechanism that protects the user's privacy, and this mechanism should ensure that no message is sent and no subscription is established without the user's express permission.<br /><br />A user agent must not send messages or subscribe for incoming ones without the express permission of the user. A user agent must acquire permission through a user interface, unless it has a prearranged trust relationship with the user, as described below. The user interface must include the URI of the document origin, as defined in HTML 5... A user agent may have prearranged trust relationships that do not require such user interfaces. For example, while a Web browser will present a user interface when a Web site requests an SMS subscription, a Widget Runtime may have a prearranged, delegated security relationship with the user and, as such, a suitable alternative security and privacy mechanism with which to authorize that operation...." <a href="http://www.w3.org/TR/2010/WD-messaging-api-20100810/">More Info</a>Sajjadhttp://www.blogger.com/profile/02956747095576348209noreply@blogger.com0tag:blogger.com,1999:blog-6063171099764078988.post-28607384271311780232010-03-18T22:31:00.001-07:002010-03-18T22:31:47.486-07:00Public Data: Translating Existing Models to RDF"As we encourage linked data adoption within the UK public sector, something we run into again and again is that (unsurprisingly) particular domain areas have pre-existing standard ways of thinking about the data that they care about. There are existing models, often with multiple serialisations, such as in XML and a text-based form, that are supported by existing tool chains. 
In contrast, if there is existing RDF in that domain area, it's usually been designed by people who are more interested in the RDF than in the domain area, and is thus generally more focused on the goals of the typical casual data re-user rather than the professionals in the area...<br />To give an example, the international statistics community uses SDMX for representing and exchanging statistics... SDMX includes a well-thought-through model for statistical datasets and the observations within them, as well as standard concepts for things like gender, age, unit multipliers and so on. By comparison, SCOVO, the main RDF model for representing statistics, barely scratches the surface. This isn't the only example: the INSPIRE Directive defines how geographic information must be made available. GEMINI defines the kind of geospatial metadata that that community cares about. The Open Provenance Model is the result of many contributors from multiple fields, and again has a number of serialisations.<br />You could view this as a challenge: experts in their domains already have models and serialisations for the data that they care about; how can we persuade them to adopt an RDF model and serialisations instead? But that's totally the wrong question. Linked data doesn't, can't and won't replace existing ways of handling data. The question is really about how to enable people to reap these benefits; the answer, because HTTP-based addressing and typed linkage is usually hard to introduce into existing formats, is usually to publish data using an RDF-based model alongside existing formats. This might be done by generating an RDF-based format (such as RDF/XML or Turtle) as an alternative to the standard XML or HTML, accessible via content negotiation, or by providing a GRDDL transformation that maps an XML format into RDF/XML...<br />Modelling is a complex design activity, and you're best off avoiding doing it if you can. 
That means reusing conceptual models that have been built up for a domain as much as possible and reusing existing vocabularies wherever you can. But you can't and shouldn't try to avoid doing design when mapping from a conceptual model to a particular modelling paradigm such as a relational, object-oriented, XML or RDF model. If you're mapping to RDF, remember to take advantage of what it's good at, such as web-scale addressing and extensibility, and always bear in mind how easy or difficult your data will be to query. There is no point publishing linked data if it is unusable..."<br /><a href="http://www.jenitennison.com/blog/node/142">http://www.jenitennison.com/blog/node/142</a> See also Linked Data: <a href="http://www.w3.org/standards/semanticweb/data">http://www.w3.org/standards/semanticweb/data</a>Sajjadhttp://www.blogger.com/profile/02956747095576348209noreply@blogger.com0tag:blogger.com,1999:blog-6063171099764078988.post-24954329430667990922010-03-18T22:30:00.000-07:002010-03-18T22:31:24.604-07:00There is REST for the Weary DeveloperThis brief article provides an example of working with the Representational State Transfer style of software architecture. REST (Representational State Transfer) is a style of software architecture for accessing information on the Web. A RESTful service refers to web services as resources that use XML over the HTTP protocol. The term REST dates back to 2000, when Roy Fielding used it in his doctoral dissertation. The W3C recommends using WSDL 2.0 as the language for defining REST web services. To explain REST, we take an example of purchasing items from a catalog application...<br />First we will define CRUD operations for this service as follows. The term CRUD stands for the basic database operations Create, Read, Update, and Delete. In the example, you can see that creating a new item with an Id is not supported. When a request for a new item is received, an Id is created and assigned to the new item. 
Also, we are not supporting the update and delete operations for the collection of items. Update and delete are supported for the individual items...<br />Interface documents: How does the client know what to expect in return when it makes a call for CRUD operations? The answer is the interface document. In this document you can define the CRUD operation mapping, the Item.xsd file, and the request and response XML. You can have separate XSDs for request and response, or the response can have text such as 'success' in return for the methods other than GET...<br />There are other frameworks available for RESTful services. Some of them are listed here: the Sun reference implementation for JAX-RS, code-named Jersey, where Jersey uses an HTTP web server called Grizzly and the Grizzly Servlet container; Ruby on Rails; Restlet; Django; Axis2.<br /><a href="http://www.devx.com/architect/Article/44341">http://www.devx.com/architect/Article/44341</a>Sajjadhttp://www.blogger.com/profile/02956747095576348209noreply@blogger.com0tag:blogger.com,1999:blog-6063171099764078988.post-18711901062878917282010-03-18T22:29:00.004-07:002010-03-18T22:30:20.752-07:00Now IBM's Getting Serious About Public IaaSJames Staten, Forrester Blog<br /><br />"IBM has been talking a good cloud game for the last year or so. They have clearly demonstrated that they understand what cloud computing is and what customers want from it, and have put forth a variety of offerings and engagements to help customers head down this path -- mostly through internal cloud and strategic rightsourcing options.<br />But its public cloud efforts, outside of application hosting, have been a bit of wait and see. Well, the company is clearly getting its act together in the public cloud space with today's announcement of the Smart Business Development and Test Cloud, a credible public Infrastructure as a Service (IaaS) offering. 
This new service is an extension of its developerWorks platform and gives its users a virtual environment through which they can assemble, integrate and validate new applications. Pricing on the service is as you would expect from an IaaS offering, and free for a limited time...<br />Certainly any IaaS can be used for test and development purposes, so IBM isn't breaking new ground here. But it's off to a solid start, with stated support from test and dev specialist partners SOASTA, VMLogix, AppFirst and Trinity Software bringing their tools to the IBM test cloud..."<br /><a href="http://blogs.forrester.com/james_staten/10-03-16-now_ibm%E2%80%99s_getting_serious_about_public_iaas">http://blogs.forrester.com/james_staten/10-03-16-now_ibm%E2%80%99s_getting_serious_about_public_iaas</a> See also Jeffrey Schwartz in GCN: <a href="http://gcn.com/articles/2010/03/17/ibm-public-cloud-service.aspx">http://gcn.com/articles/2010/03/17/ibm-public-cloud-service.aspx</a>Sajjadhttp://www.blogger.com/profile/02956747095576348209noreply@blogger.com0tag:blogger.com,1999:blog-6063171099764078988.post-12359798684055203852010-03-18T22:29:00.003-07:002010-03-18T22:29:52.682-07:00Aggregative Digital Libraries: D-NET Software Toolkit and OAIster System"Aggregative Digital Library Systems (ADLSs) provide end users with web portals to operate over an information space of descriptive metadata records, collected and aggregated from a pool of possibly heterogeneous repositories. Due to the costs of software realization and system maintenance, existing "traditional" ADLS solutions are not easily sustainable over time for the supporting organizations. Recently, the DRIVER EC project proposed a new approach to ADLS construction, based on Service-Oriented Infrastructures. The resulting D-NET software toolkit enables a running, distributed system in which one or multiple organizations can collaboratively build and maintain their service-oriented ADLSs in a sustainable way. 
Aggregative Digital Library Systems (ADLSs) typically address two main challenges: (1) populating an information space of metadata records by harvesting and normalizing records from several OAI-PMH compatible repositories; and (2) providing portals to deliver the functionalities required by the user community to operate over the aggregated information space, for example, search, annotations, recommendations, collections, user profiling, etc.<br />Repositories are defined here as software systems that typically offer functionalities for storing and accessing research publications and related metadata information. Access usually takes the twofold form of search through a web portal and bulk metadata retrieval through OAI-PMH interfaces. In recent years, research institutions, university libraries, and other organizations have been increasingly setting up repository installations (based on technologies such as Fedora, ePrints, DSpace, Greenstone, OpenDlib, etc.) to improve the impact and visibility of their user communities' research outcomes.<br />In this paper, we advocate that D-NET's 'infrastructural' approach to ADLS realization and maintenance proves to be generally more sustainable than 'traditional' ones. 
To demonstrate our thesis, we report on the sustainability of the 'traditional' OAIster System ADLS, based on DLXS software (University of Michigan), and that of the 'infrastructural' DRIVER ADLS, based on D-NET.<br />As an exemplar of traditional solutions we rely on the well-known OAIster System, whose technology was realized at the University of Michigan. The analysis will show that constructing static or evolving ADLSs using D-NET can notably reduce software realization costs and that, for evolving requirements, refinement costs for maintenance can be made more sustainable over time..."<br /><a href="http://www.dlib.org/dlib/march10/manghi/03manghi.html">http://www.dlib.org/dlib/march10/manghi/03manghi.html</a>Sajjadhttp://www.blogger.com/profile/02956747095576348209noreply@blogger.com0tag:blogger.com,1999:blog-6063171099764078988.post-2030983662105745322010-03-18T22:29:00.001-07:002010-03-18T22:29:30.832-07:00Definitions for Expressing Standards Requirements in IANA RegistriesThe Internet Engineering Steering Group (IESG) has received a request to consider the specification "Definitions for Expressing Standards Requirements in IANA Registries" as a Best Current Practice RFC (BCP). The IESG plans to make a decision in the next few weeks, and solicits final comments on this action; please send substantive comments to the IETF mailing lists by 2010-04-14.<br />Abstract: "RFC 2119 defines words that are used in IETF standards documents to indicate standards compliance. These words are fine for defining new protocols, but there are certain deficiencies in using them when it comes to protocol maintainability. Protocols are maintained by either updating the core specifications or via changes in protocol registries. For example, security functionality in protocols often relies upon cryptographic algorithms that are defined in external documents. Cryptographic algorithms have a limited life span, and new algorithms are regularly phased in to replace older algorithms. 
This document proposes standard terms to use in protocol registries, and possibly in standards-track and informational documents, to indicate the life cycle support of protocol features and operations.<br />The proposed requirement words for IANA protocol registries include the following. (1) MANDATORY: This is the strongest requirement, and for an implementation to ignore it there MUST be a valid and serious reason. (2) DISCRETIONARY: For Implementations, any implementation MAY or MAY NOT support this entry in the protocol registry; the presence or omission of this MUST NOT be used to judge implementations on standards compliance. For Operations, any use of this registry entry in operation is supported; ignoring or rejecting requests using this protocol component MUST NOT be used as a basis for asserting lack of compliance. (3) OBSOLETE: For Implementations, this means new implementations SHOULD NOT support this functionality; for Operations, it means any use of this functionality in operation MUST be phased out. (4) ENCOURAGED: This word is added to the registry entry when new functionality is added and before it is safe to rely solely on it. Protocols that have the ability to negotiate capabilities MAY NOT need this state. (5) DISCOURAGED: This requirement is placed on an existing function that is being phased out. This is similar in spirit to both MUST- and SHOULD- as defined and used in certain RFCs such as RFC 4835. (6) RESERVED: Sometimes there is a need to reserve certain values to avoid problems, such as values that have been used in implementations but were never formally registered. In other cases reserved values are magic numbers that may be used in the future as escape valves if the number space becomes too small. (7) AVAILABLE: A value that can be allocated by IANA at any time..."<br />This document is motivated by the experiences of the editors in trying to maintain registries for DNS and DNSSEC. 
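The life-cycle words above are, in effect, an extra column of data attached to each registry entry. A small Python sketch can make the distinction between implementation requirements and operational acceptability concrete; the algorithm names and their assigned words below are invented examples, not the contents of any real IANA registry.

```python
# Illustrative sketch of an IANA-style registry whose entries carry the
# life-cycle words proposed in the draft. Entry names and their assigned
# words are hypothetical examples, not real registry contents.

REGISTRY = {
    "hmac-md5": "OBSOLETE",         # use MUST be phased out
    "hmac-sha256": "MANDATORY",     # strongest requirement
    "hmac-sha384": "DISCRETIONARY", # support is optional
    "new-mac-alg": "ENCOURAGED",    # added, but not yet safe to rely on
}

def implementation_must_support(name):
    """Only MANDATORY entries impose a hard implementation requirement."""
    return REGISTRY.get(name) == "MANDATORY"

def operationally_acceptable(name):
    """Unregistered and OBSOLETE entries should not appear in operation."""
    return REGISTRY.get(name) not in (None, "OBSOLETE")
```

The point of the scheme is exactly this separation: a word like OBSOLETE constrains operation without retroactively declaring existing implementations non-compliant.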
For example, DNS defines a registry for hash algorithms used for a message authentication scheme called TSIG; the first entry in that registry was for HMAC-MD5. The DNSEXT working group decided to try to decrease the number of algorithms listed in the registry and add a column to the registry listing the requirements level for each one. Upon reading that HMAC-MD5 was tagged as 'OBSOLETE', a firestorm started. It was interpreted as the DNS community making a statement on the status of HMAC-MD5 for all uses.<br /><a href="http://xml.coverpages.org/draft-ogud-iana-protocol-maintenance-words-03.txt">http://xml.coverpages.org/draft-ogud-iana-protocol-maintenance-words-03.txt</a><br />See also 'Using MUST and SHOULD and MAY': <a href="http://www.ietf.org/tao.html#anchor42">http://www.ietf.org/tao.html#anchor42</a><br />Sajjad (2010-03-18)<br /><br />New Models of Human Language to Support Mobile Conversational Systems<br />W3C has announced a Workshop on Conversational Applications: Use Cases and Requirements for New Models of Human Language to Support Mobile Conversational Systems. The workshop will be held June 18-19, 2010 in New Jersey, US, hosted by Openstream. The main outcome of the workshop will be the publication of a document that will serve as a guide for improving the W3C language model. W3C membership is not required to participate in this workshop. The current program committee consists of: Paolo Baggia (Loquendo), Daniel C. Burnett (Voxeo), Deborah Dahl (W3C Invited Expert), Kurt Fuqua (Cambridge Mobile), Richard Ishida (W3C), Michael Johnston (AT&T), James A.
Larson (W3C Invited Expert), Sol Lerner (Nuance), David Nahamoo (IBM), Dave Raggett (W3C), Henry Thompson (W3C/University of Edinburgh), and Raj Tumuluri (Openstream).<br />"A number of developers of conversational voice applications feel that the model of human language currently supported by W3C standards such as SRGS, SISR and PLS is not adequate and that developers need new capabilities in order to support more sophisticated conversational applications. The goal of the workshop therefore is to understand the limitations of the current W3C language model in order to develop a more comprehensive model. We plan to collect and analyze use cases and prioritize requirements that ultimately will be used to identify improvements to the W3C language model. Just as W3C developed SSML 1.1 to broaden the languages for which SSML is useful, this effort will result in improved support for language capabilities that are unsupported today.<br />Suggested Workshop topics for position papers include: (1) Use cases and requirements for grammar formalisms more powerful than SRGS's context-free grammars that are needed to implement tomorrow's applications. (2) What are the common aspects of human language models for different languages that can be factored into reusable modules? (3) Use cases and requirements for realigning/extending SRGS, PLS and SISR to support more powerful human language models. (4) Use cases and requirements for sharing grammars among concurrent applications. (5) Use cases that illustrate requirements for natural language capabilities for conversational dialog systems that cannot easily be implemented using the current W3C conversational language model. (6) Use cases and requirements for speech-enabled applications that can be used across multiple languages (English, German, Spanish, ...) with only minor modifications.
(7) Use cases and requirements for composing the behaviors of multiple speech-enabled applications that were developed independently, without requiring changes to the applications. (8) Use cases and requirements motivating the need to resolve ellipses and anaphoric references to previous utterances.<br />Position papers, due April 2, 2010, must describe requirements and use cases for improving W3C standards for conversational interaction and how the use cases justify one or more of these topics: formal notations for representing grammar in: Syntax, Morphology, Phonology, Prosodics; engine standards for improvement in processing: Syntax, Morphology, Phonology, Lexicography; lexicography standards for: parts-of-speech, grammatical features and polysemy; formal semantic representation of human language including: verbal tense, aspect, valency, plurality, pronouns, adverbs; efficient data structures for binary representation and passing of: parse trees, alternate lexical/morphologic analyses, alternate phonologic analyses; other suggested areas of improvement for standards-based conversational systems development..."<br /><a href="http://www.w3.org/2010/02/convapps/cfp">http://www.w3.org/2010/02/convapps/cfp</a><br />See also W3C Workshops: <a href="http://www.w3.org/2003/08/Workshops/">http://www.w3.org/2003/08/Workshops/</a><br />Sajjad (2010-03-18)<br /><br />Integrating Composite Applications on the Cloud Using SCA<br />"Elastic computing has made it possible for organizations to use cloud computing and a minimum of computing resources to build and deploy a new generation of applications.
Using the capabilities provided by the cloud, enterprises can quickly create hybrid composite applications on the cloud using the best practices of service-component architectures (SCA).<br />Since SCA promotes all the best practices used in service-oriented architectures (SOA), building composite applications using SCA is one of the best guidelines for creating cloud-based composite applications. Applications created using several different runtimes running on the cloud can be leveraged to create new components, and hybrid composite applications which scale on demand with private/public cloud models can also be built using secure transport data channels.<br />In this article, we show how to build and integrate composite applications using Apache Tuscany, the Eucalyptus open source cloud framework, and OpenVPN to create a hybrid composite application. To show that distributed applications comprising composite modules (distributed across the cloud and enterprise infrastructure) can be integrated and function as a single unit using SCA without compromising on security, we create a composite application whose components are spread over different domains distributed across the cloud and the enterprise infrastructure. We then use SCA to host and integrate this composite application so that it fulfills the necessary functional requirements.
To ensure information and data security, we set up a virtual private network (VPN) between the different domains (cloud and enterprise), creating a point-to-point encrypted network which provides secure information exchange between the two environments...<br />This project illustrates that distributed applications comprising composite modules (distributed across the cloud and enterprise infrastructure) can be integrated and made to function as a single unit using Service Component Architecture (SCA) without compromising on security..."<br /><a href="http://www.drdobbs.com/web-development/223800269">http://www.drdobbs.com/web-development/223800269</a><br />Sajjad (2010-03-18)<br /><br />IETF Update: Specification for a URI Template<br />A revised version of the IETF Standards Track Internet Draft "URI Template" has been published. From the abstract: "A URI Template is a compact sequence of characters for describing a range of Uniform Resource Identifiers through variable expansion. This specification defines the URI Template syntax and the process for expanding a URI Template into a URI, along with guidelines for the use of URI Templates on the Internet.<br />Overview: "A Uniform Resource Identifier (URI) is often used to identify a specific resource within a common space of similar resources... URI Templates provide a mechanism for abstracting a space of resource identifiers such that the variable parts can be easily identified and described.
URI templates can have many uses, including discovery of available services, configuring resource mappings, defining computed links, specifying interfaces, and other forms of programmatic interaction with resources.<br />A URI Template provides both a structural description of a URI space and, when variable values are provided, a simple instruction on how to construct a URI corresponding to those values. A URI Template is transformed into a URI-reference by replacing each delimited expression with its value as defined by the expression type and the values of variables named within the expression. The expression types range from simple value expansion to multiple key=value lists. The expansions are based on the URI generic syntax, allowing an implementation to process any URI Template without knowing the scheme-specific requirements of every possible resulting URI.<br />A URI Template may be provided in absolute form, as in the examples above, or in relative form if a suitable base URI is defined... A URI Template is also an IRI template, and the result of template processing can be rendered as an IRI by transforming the pct-encoded sequences to their corresponding Unicode character if the character is not in the reserved set... Parsing a valid URI Template expression does not require building a parser from the given ABNF. Instead, the set of allowed characters in each part of a URI Template expression has been chosen to avoid complex parsing, and breaking an expression into its component parts can be achieved by a series of splits of the character string.
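As a rough illustration of that split-based approach, the following Python sketch handles only simple '{var}' value expansion; it is not the draft's reference code, and the full expression types (operators, key=value lists) are out of scope here.

```python
from urllib.parse import quote

# Illustrative sketch: expand simple '{var}' expressions in a URI Template
# by splitting the character string rather than building an ABNF parser.
def expand(template: str, variables: dict) -> str:
    parts = []
    for i, chunk in enumerate(template.split("{")):
        if i == 0:
            parts.append(chunk)           # literal text before the first expression
            continue
        expr, _, literal = chunk.partition("}")
        value = variables.get(expr, "")   # unset variables expand to empty
        parts.append(quote(str(value), safe="") + literal)
    return "".join(parts)

print(expand("http://example.com/~{username}/", {"username": "fred"}))
# http://example.com/~fred/
```

Percent-encoding the substituted values (via `quote`) keeps the result a valid URI-reference; the draft's operator-based expression types would layer on top of the same split-and-substitute skeleton.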
Example Python code [is planned] that parses a URI Template expression and returns the operator, argument, and variables as a tuple..."<br /><a href="http://xml.coverpages.org/draft-gregorio-uritemplate-04.txt">http://xml.coverpages.org/draft-gregorio-uritemplate-04.txt</a><br />Sajjad (2010-03-18)<br /><br />What Standardization Will Mean For Ruby<br />Mirko Stocker, InfoQ<br />Ruby's inventor Matz announced plans to standardize Ruby in order to "improve the compatibility between different Ruby implementations [..] and to ease Ruby's way into the Japanese government". The first proposal for standardization will be to the Japanese Industrial Standards Committee and, in a further step, to the ISO, to become an international standard. For now, a first draft (that weighs in at over 300 pages) and official announcement are available. Alternatively, there's a wiki under development to make the standard available in HTML format.<br />A very different approach to uniting Ruby implementations is the RubySpec project -- a community-driven effort to build an executable specification. RubySpec is an offspring of the Rubinius project... [But] what do our readers think: will it be easier to introduce Ruby in their organizations if there's an ISO standard behind it?"<br />According to RubySpec lead Brian Ford: "I think the ISO standardization effort is very important for Ruby, both for the language and for the community, which in my mind includes the Ruby programmers, people who use software written in Ruby, and the increasing number of businesses based on or using software written in Ruby. The Standardization document and RubySpec are complementary in my view. The document places primary importance on describing Ruby in prose with appropriate formatting formalities.
The document envisions essentially one definition of Ruby. RubySpec, in contrast, places primary importance on code that demonstrates the behavior of Ruby. However, RubySpec also emphasizes describing Ruby in prose as an essential element of the executable specification, and is the reason we use RSpec-compatible syntax. RubySpec also attempts to capture the behavior of the union of all Ruby implementations. It provides execution guards that document the specs for differences between implementations. For example, not all platforms used to implement Ruby support forking a process. So the specs have guards for which implementations provide that feature... This illustrates an important difference between the ISO Standardization document and RubySpec. The ISO document can simply state that a particular aspect of the language is "implementation defined" and provide no further guidance. Unfortunately, implementing such a standard can be difficult, as we have seen with the confusion caused by various browser vendors attempting to implement CSS. RubySpec attempts to squeeze the total number of unspecified Ruby behaviors to the smallest size possible..."<br /><a style="COLOR: rgb(51,51,51)" href="http://www.infoq.com/news/2010/03/ruby-standardization" target="_blank">http://www.infoq.com/news/2010/03/ruby-standardization</a><br />See also the Ruby Standard Wiki: <a style="COLOR: rgb(51,51,51)" href="http://wiki.ruby-standard.org/wiki/Main_Page" target="_blank">http://wiki.ruby-standard.org/wiki/Main_Page</a><br />Sajjad (2010-03-18)<br /><br />New Release of Oxygen XML Editor and Oxygen XML Author Supports DITA<br />Developers of the Oxygen XML Editor and Author toolsuite have announced the immediate availability of version 11.2 of the XML Editor and XML Author, containing a comprehensive set of tools supporting all the XML-related technologies.
Oxygen combines content author features like the CSS-driven Visual XML editor with a fully featured XML development environment. It has ready-to-use support for the main document frameworks DITA, DocBook, TEI and XHTML, and also includes support for all XML Schema languages, XSLT/XQuery Debuggers, a WSDL analyzer, XML Databases, XML Diff and Merge, a Subversion client and more.<br />New features in version 11.2: Version 11.2 of Oxygen XML Editor improves the XML authoring, the XML development tools, the support for large documents and the SVN Client. The visual XML editing (Author mode) is now available as a separate component that can be integrated in Java applications or, as an Applet, in Web applications. A sample Web application showing the Author component in the browser, as an Applet, editing DITA documents is available...<br />Other XML Author improvements include support for preserving the formatting of unchanged elements and an updated Author API containing a number of new extensions that allow customizing the Outline, the Breadcrumb and the Status Bar. The XSLT Debugger provides more flexibility and is the first debugger that can step inside XPath 2.0 expressions. The Saxon 9 EE bundled with Oxygen can be used to run XQuery 1.1 transformations. The XProc support was aligned with the recent update as W3C Proposed Recommendation and includes the latest Calabash XProc processor.<br />In 'Author for DITA' there is support for Reusable Components: a fragment of a topic can be extracted into a separate file for reuse in different topics. The component can be reused by inserting an element with a conref attribute where the content of the component is needed. This works without any additional configuration and supports any DITA specialization. Similarly, there's support for Content References Management: the DITA framework includes actions for adding, editing and removing a content reference (conref, conkeyref, conrefend attributes) to/from an existing element...
A new schema caching mechanism makes it possible to quickly open large DITA Maps and their referred topics..."<br /><a style="COLOR: rgb(51,51,51)" href="http://www.oxygenxml.com/index.html#new-version" target="_blank">http://www.oxygenxml.com/index.html#new-version</a><br />See also XML Author Component for the DITA Documentation Framework: <a style="COLOR: rgb(51,51,51)" href="http://www.oxygenxml.com/demo/AuthorDemoApplet/author-component-dita.html" target="_blank">http://www.oxygenxml.com/demo/AuthorDemoApplet/author-component-dita.html</a><br />Sajjad (2010-03-18)<br /><br />HTML5, Hardware Accelerated: First IE9 Platform Preview Available<br />Dean Hachamovitch, Windows Internet Explorer Weblog<br />At the Las Vegas MIX10 Conference, Microsoft Internet Explorer developers demonstrated "how the standard web patterns that developers already know and use broadly run better by taking advantage of PC hardware through IE9 on Windows." A blog article by Dean Hachamovitch provides an overview of what we showed, "across performance, standards, hardware-accelerated HTML5 graphics, and the availability of the IE9 Platform Preview for developers...<br />First, we showed IE9's new script engine, internally known as 'Chakra,' and the progress we've made on an industry benchmark for JavaScript performance... We showed our progress in making the same standards-based HTML, script, and formatting markup work across different browsers. We shared the data and framework that informed our approach, and demonstrated better support for several standards: HTML5, DOM, and CSS3.
We showed IE9's latest Acid3 score (55); as we make progress on the industry goal of having the same markup that developers actually use working across browsers, our Acid3 score will continue to go up... In several demonstrations, we showed the significant performance gains that graphically rich, interactive web pages enjoy when a browser takes full advantage of the PC's hardware capabilities through the operating system. The same HTML, script, and CSS markup work across several different browsers; the pages just run significantly faster in IE9 because of hardware-accelerated graphics. IE9 is also the first browser to provide hardware-accelerated SVG support...<br />The goal of standards and interoperability is that the same HTML, script, and formatting markup work the same across different browsers. Eliminating the need for different code paths for different browsers benefits everyone, and creates more opportunity for developers to innovate. The main technologies to call out here broadly are HTML5, CSS3, DOM, and SVG. The IE9 test drive site has more specifics and samples. At this time, we're looking for developer feedback on our implementation of HTML5's parsing rules, Selection APIs, XHTML support, and inline SVG. Within CSS3, we're looking for developer feedback on IE9's support for Selectors, Namespaces, Colors, Values, Backgrounds and Borders, and Fonts. Within DOM, we're looking for developer feedback on IE9's support for Core, Events, Style, and Range... As IE makes more progress on the industry goal of 'same markup' for standards and parts of standards that developers actually use, the Acid3 score will continue to go up as a result. A key part of our approach to web standards is the development of an industry standard test suite.
Today, Microsoft has submitted over 100 additional tests of HTML5, CSS3, DOM, and SVG to the W3C..."<br /><a style="COLOR: rgb(51,51,51)" href="http://preview.tinyurl.com/ykceeex" target="_blank">http://preview.tinyurl.com/ykceeex</a><br />See also Paul Krill's InfoWorld article: <a style="COLOR: rgb(51,51,51)" href="http://www.infoworld.com/d/applications/microsoft-embraces-html5-specification-in-ie9-861" target="_blank">http://www.infoworld.com/d/applications/microsoft-embraces-html5-specification-in-ie9-861</a><br />Sajjad (2010-03-18)<br /><br />Open Source of ebMS V3 Message Handler and AS4 Profile on Sourceforge<br />Holodeck, an open source implementation of ebXML Messaging Version 3 and its AS4 profile, is now available on Sourceforge with online documentation. The ebXML Messaging V3 specification defines a communications-protocol-neutral method for exchanging electronic business messages. It defines specific Web Services-based enveloping constructs supporting reliable, secure delivery of business information. Furthermore, the specification defines a flexible enveloping technique, permitting messages to contain payloads of any format type...<br />The OASIS specification "AS4 Profile of ebMS V3" abstract: "While ebMS 3.0 represents a leap forward in reducing the complexity of Web Services B2B messaging, the specification still contains numerous options and comprehensive alternatives for addressing a variety of scenarios for exchanging data over a Web Services platform. The AS4 profile of the ebMS 3.0 specification has been developed in order to bring continuity to the principles and simplicity that made AS2 successful, while adding better compliance to Web services standards, and features such as message pulling capability and a built-in Receipt mechanism.
Using ebMS 3.0 as a base, a subset of functionality is defined along with implementation guidelines adopted based on the 'just-enough' design principles and AS2 functional requirements, to trim down ebMS 3.0 into a more simplified and AS2-like specification for Web Services B2B messaging. This document defines the AS4 profile as a combination of a conformance profile that concerns an implementation capability, and of a usage profile that concerns how to use this implementation. A couple of variants are defined for the AS4 conformance profile -- the AS4 ebHandler profile and the AS4 Light Client profile -- that reflect different endpoint capabilities."<br />Holodeck's primary goal is to provide an open-source product for B2B messaging based on ebXML Messaging version 3 that can be used by ebXML communities as well as Web Services communities. Because ebXML Messaging version 3 is compatible with web services, Holodeck provides an integration of ebXML, web services and AS4 in one package. Holodeck can be used in the following scenarios: (1) Pure ebXML messaging in B2B or within different departments of the same company. (2) Messaging gateway to an ESB: the ESB provides integration within a company, while Holodeck plays the gateway to communicate with the external world via messaging. (3) An environment where there is a need for both Web Service consumption and heavy B2B messaging where web services fail...<br />Holodeck comes with a scalable architecture: a datastore for messages (JDO by default, a MySQL pre-configured option, and interfaces to other databases), and streaming for large messages (based on Axis2 streaming). The project is funded and maintained by Fujitsu America, Inc. This package comes with a "no coding necessary" out-of-the-box experience and tutorials, allowing you to deploy and test without having to write code up-front, using a directory system as an application-layer substitute to store as files the elements of messages to be sent, and to receive them.
Developers can download binaries and source code, and get a fresh copy directly from the "Subversion" versioning system...<br /><a style="COLOR: rgb(51,51,51)" href="http://ebxml.xml.org/news/open-source-of-ebms-v3-message-handler-and-its-as4-profile-on-sourceforge" target="_blank">http://ebxml.xml.org/news/open-source-of-ebms-v3-message-handler-and-its-as4-profile-on-sourceforge</a><br />See also the Holodeck resources from SourceForge: <a style="COLOR: rgb(51,51,51)" href="http://holodeck-b2b.sourceforge.net/" target="_blank">http://holodeck-b2b.sourceforge.net/</a><br />Sajjad (2010-03-18)<br /><br />IESG Issues Last Call Review for MODS/MADS/METS/MARCXML/SRU Media Types<br />The Internet Engineering Steering Group (IESG) has received a request from an individual submitter to consider the following Standards Track I-D as an IETF Proposed Standard: "The Media Types application/mods+xml, application/mads+xml, application/mets+xml, application/marcxml+xml, application/sru+xml." The IESG plans to make a decision in the next few weeks, and solicits final comments on this action; please send substantive comments to the IETF lists by 2010-04-12.<br />This document "specifies Media Types for the following formats: MODS (Metadata Object Description Schema), MADS (Metadata Authority Description Schema), METS (Metadata Encoding and Transmission Standard), MARCXML (MARC21 XML Schema), and the SRU (Search/Retrieve via URL Response Format) Protocol response XML schema. These are all XML schemas providing representations of various forms of information including metadata and search results.<br />The U.S.
Library of Congress, on behalf of and in collaboration with various components of the metadata and information retrieval community, has issued specifications which define formats for representation of various forms of information including metadata and search results. This memo provides information about the Media Types associated with several of these formats, all of which are XML schemas. (1) 'MODS: Metadata Object Description Schema' is an XML schema for a bibliographic element set that may be used for a variety of purposes, and particularly for library applications. (2) 'MADS: Metadata Authority Description Schema' is an XML schema for an authority element set used to provide metadata about agents (people, organizations), events, and terms (topics, geographics, genres, etc.). It is a companion to the MODS Schema. (3) 'METS: Metadata Encoding and Transmission Standard' defines an XML schema for encoding descriptive, administrative, and structural metadata regarding objects within a digital library. (4) 'MARCXML: MARC21 XML Schema' is an XML schema for the direct XML representation of the MARC format (for which there already exists a media type, application/marc); by 'direct XML representation' it is meant that it encodes the actual MARC data within XML... (5) 'SRU: Search/Retrieve via URL Response Format' provides an XML schema for the SRU response. SRU is a protocol, and the media type 'sru+xml' pertains specifically to the default SRU response. The SRU response may be supplied in any of a number of suitable schemas (RSS or Atom, for example), and the client identifies the desired format in the request, hence the need for a media type. This mechanism will be introduced in SRU 2.0; in previous versions (that is, all versions to date; 2.0 is in development) all responses are supplied in the existing default format, so no media type was necessary.
SRU 2.0 is being developed within OASIS.<br /><a style="COLOR: rgb(51,51,51)" href="http://xml.coverpages.org/draft-denenberg-mods-etc-media-types-01.txt" target="_blank">http://xml.coverpages.org/draft-denenberg-mods-etc-media-types-01.txt</a><br />See also IANA registration for MIME Media Types: <a style="COLOR: rgb(51,51,51)" href="http://www.iana.org/assignments/media-types/" target="_blank">http://www.iana.org/assignments/media-types/</a><br />Sajjad (2010-03-18)<br /><br />OASIS SCA-C-C++ Technical Committee Publishes Two Public Review Drafts<br />Bryan Aupperle, David Haney, Pete Robbins (eds), OASIS Review Drafts<br />Members of the OASIS Service Component Architecture / C and C++ (SCA-C-C++) Technical Committee have released two Committee Drafts for public review through March 25, 2010. This TC is part of the OASIS Open Composite Services Architecture (Open CSA) Member Section, which advances open standards that simplify SOA application development. Open CSA brings together vendors and users from around the world to collaborate on standard ways to unify services regardless of programming language or deployment platform. Open CSA promotes the further development and adoption of the Service Component Architecture (SCA) and Service Data Objects (SDO) families of specifications. SCA helps organizations more easily design and transform IT assets into reusable services that can be rapidly assembled to meet changing business requirements. SDO lets application programmers uniformly access and manipulate data from heterogeneous sources, including relational databases, XML data sources, Web services, and enterprise information systems.<br />"Service Component Architecture Client and Implementation Model for C++ Specification Version 1.1" describes "the SCA Client and Implementation Model for the C++ programming language.
The SCA C++ implementation model describes how to implement SCA components in C++. A component implementation itself can also be a client to other services provided by other components or external services. The document describes how a C++-implemented component gets access to services and calls their operations. This document also explains how non-SCA C++ components can be clients to services provided by other components or external services. The document shows how those non-SCA C++ component implementations access services and call their operations."<br />"Service Component Architecture Client and Implementation Model for C Specification Version 1.1" describes "the SCA Client and Implementation Model for the C programming language. The SCA C implementation model describes how to implement SCA components in C. A component implementation itself can also be a client to other services provided by other components or external services. The document describes how a component implemented in C gets access to services and calls their operations. The document also explains how non-SCA C components can be clients to services provided by other components or external services. The document shows how those non-SCA C component implementations access services and call their operations."<br />The OASIS SCA-C-C++ TC is developing "the C and C++ programming model for clients and component implementations using the Service Component Architecture (SCA). SCA defines a model for the creation of business solutions using a Service-Oriented Architecture, based on the concept of Service Components which offer services and which make references to other services. SCA models business solutions as compositions of groups of service components, wired together in a configuration that satisfies the business goals.
SCA applies aspects such as communication methods and policies for infrastructure capabilities such as security and transactions through metadata attached to the compositions."<br /><a style="COLOR: rgb(51,51,51)" href="http://docs.oasis-open.org/opencsa/sca-c-cpp/sca-cppcni-1.1-spec-cd05.html" target="_blank">http://docs.oasis-open.org/opencsa/sca-c-cpp/sca-cppcni-1.1-spec-cd05.html</a><br />See also the Model for C specification: <a style="COLOR: rgb(51,51,51)" href="http://docs.oasis-open.org/opencsa/sca-c-cpp/sca-ccni-1.1-spec-cd05.html" target="_blank">http://docs.oasis-open.org/opencsa/sca-c-cpp/sca-ccni-1.1-spec-cd05.html</a><br />Sajjad (2010-03-18)<br /><br />Early Draft Review for JSR-310 Specification: Date and Time API<br />Stephen Colebourne, Michael Nascimento Santos (et al., eds), JSR Draft<br />Project editors for Java Specification Request 310: Date and Time API have published an Early Draft Review (EDR) to gain feedback on an early version of the JSR. The contents of the EDR are the prose specification and the javadoc. According to the original published Request, JSR 310 "will provide a new and improved date and time API for Java. The main goal is to build upon the lessons learned from the first two APIs (Date and Calendar) in Java SE, providing a more advanced and comprehensive model for date and time manipulation.<br />The new API will be targeted at all applications needing a data model for dates and times. This model will go beyond classes to replace Date and Calendar, to include representations of date without time, time without date, durations and intervals. This will raise the quality of application code.
For example, instead of using an int to store a duration, and javadoc to describe it as being a number of days, the date and time model will provide a class defining it unambiguously. The new API will also tackle related date and time issues. These include formatting and parsing, taking into account the ISO 8601 standard and its implementations, such as XML. In addition, the areas of serialization and persistence will be considered... In this specification model, dates and times are separated into two basic use cases: machine-scale and human-scale. Machine-scale time represents the passage of time using a single, continually incrementing number. The rules that determine how the scale is measured and communicated are typically defined by international scientific standards organisations. Human-scale time represents the passage of time using a number of named fields, such as year, month, day, hour, minute and second. The rules that determine how the fields work together are defined in a calendar system...<br /><br />From the specification introduction: "Many Java applications require logic to store and manipulate dates and times. At present, Java SE provides a number of disparate APIs for this purpose, including Date, Calendar, SQL Date/Time/Timestamp and XML Duration/XMLGregorianCalendar. Unfortunately, these APIs are not all particularly well-designed and they do not cover many use cases needed by developers. As an example, Java developers currently have no standard Java SE class to represent the concept of a date without a time, a time without a date or a duration. The result of these missing features has been widespread abuse of the facilities which are provided, such as using the Date or Calendar class with the time set to midnight to represent a date without a time. Such an approach is very error-prone - there are certain time zones where midnight doesn't exist once a year due to the daylight saving time cutover. 
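The machine-scale/human-scale split and the date-without-time problem described above can be sketched with the java.time package, which is what JSR-310 eventually became in Java SE 8 (the EDR itself used the draft javax.time namespace, so class names differed slightly):

```java
import java.time.Duration;
import java.time.LocalDate;
import java.time.LocalTime;
import java.time.Period;

public class DateTimeSketch {
    public static void main(String[] args) {
        // A date with no time component -- no "Calendar at midnight" hack
        LocalDate release = LocalDate.of(2010, 3, 18);

        // A time with no date component
        LocalTime standup = LocalTime.of(9, 30);

        // Human-scale amount of time, expressed in calendar fields
        Period oneMonth = Period.ofMonths(1);

        // Machine-scale amount of time, expressed in exact seconds/nanos
        Duration twoHours = Duration.ofHours(2);

        // Immutable and fluent: plus() returns a new object,
        // the original 'release' and 'standup' values are unchanged
        System.out.println(release.plus(oneMonth));  // 2010-04-18
        System.out.println(standup.plus(twoHours));  // 11:30
    }
}
```

The immutability shown here is Design Goal (1) of the JSR: every arithmetic method returns a fresh value object, so instances can be freely shared across threads.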
JSR-310 tackles this by providing a comprehensive set of date and time classes suitable for Java SE today. The specification includes: Date and Time; Date without Time; Time without Date; Offset from UTC; Time Zone; Durations; Periods; Formatting and Parsing; A selection of calendar systems...<br /><br />Design Goals for JSR-310: (1) Immutable - The JSR-310 classes should be immutable wherever possible. Experience over time has shown that APIs at this level should consist of simple immutable objects. These are simple to use, can be easily shared, are inherently thread-safe, friendly to the garbage collector and tend to have fewer bugs due to the limited state-space. (2) Fluent API - The API strives to be fluent within the standard patterns of Java SE. A fluent API has methods that are easy to read and understand, specifically when chained together. The key goal here is to simplify the use and enhance the readability of the API. (3) Clear, explicit and expected - Each method in the API should be well-defined and clear in what it does. This isn't just a question of good javadoc, but also of ensuring that the method can be called in isolation successfully and meaningfully. (4) Extensible - The API should be extensible in well-defined ways by application developers, not just JSR authors. The reasoning is simple - there are just far too many weird and wonderful ways to manipulate time. 
A JSR cannot capture all of them, but an extensible JSR design can allow for them to be added as required by application developers or open source projects..." <a style="COLOR: rgb(51,51,51)" href="http://wiki.java.net/bin/view/Projects/DateTimeEDR1" target="_blank">http://wiki.java.net/bin/view/Projects/DateTimeEDR1</a> See also the InfoQ article by Alex Blewitt and Charles Humble: <a style="COLOR: rgb(51,51,51)" href="http://www.infoq.com/news/2010/03/jsr-310" target="_blank">http://www.infoq.com/news/2010/03/jsr-310</a><br /><br />W3C XML Security Working Group Releases Four Working Drafts for Review (2010-03-18)<br />Members of the W3C XML Security Working Group have published four Working Draft specifications for public review. This WG, along with the W3C Web Security Context Working Group, is part of the W3C XML Security Activity, and is chartered to take the next step in developing the XML security specifications.<br /><br />"XML Encryption Syntax and Processing Version 1.1" specifies "a process for encrypting data and representing the result in XML. The data may be in a variety of formats, including octet streams and other unstructured data, or structured data formats such as XML documents, an XML element, or XML element content. The result of encrypting data is an XML Encryption element which contains or references the cipher data."<br /><br />"XML Security Algorithm Cross-Reference" is a W3C Note which "summarizes XML Security algorithm URI identifiers and the specifications associated with them. The various XML Security specifications have defined a number of algorithms of various types, while allowing and expecting additional algorithms to be defined later. Over time, these identifiers have been defined in a number of different specifications, including XML Signature, XML Encryption, RFCs and elsewhere. 
This makes it difficult for users of the XML Security specifications to know whether and where a URI for an algorithm of interest has been defined, and can lead to the use of incorrect URIs. The purpose of this Note is to collect the various known URIs at the time of its publication and indicate the specifications in which they are defined in order to avoid confusion and errors... The note indicates explicitly whether an algorithm is mandatory or recommended in other specifications. If nothing is said, then readers should assume that support for the algorithms given is optional."<br /><br />The "XML Security Generic Hybrid Ciphers" Working Draft "augments XML Encryption Version 1.1 by defining algorithms, XML types and elements necessary to enable use of generic hybrid ciphers in XML Security applications. Generic hybrid ciphers allow for a consistent treatment of asymmetric ciphers when encrypting data and consist of a key encapsulation algorithm with associated parameters and a data encapsulation algorithm with associated parameters." Fourth, "XML Security RELAX NG Schemas" serves to publish RELAX NG schemas for XML Security specifications, including XML Signature 1.1 and XML Signature Properties. <a style="COLOR: rgb(51,51,51)" href="http://www.w3.org/News/2010#entry-8749" target="_blank">http://www.w3.org/News/2010#entry-8749</a> See also the W3C Web Security Context WG and XML Security WG: <a style="COLOR: rgb(51,51,51)" href="http://www.w3.org/Security/Activity" target="_blank">http://www.w3.org/Security/Activity</a><br /><br />Document Format Standards and Patents (2010-03-17)<br />Alex Brown, Blog<br />This post is part of an ongoing series. 
It expands on Item 9 of 'Reforming Standardisation in JTC 1', which proposed Ten Recommendations for Reform; Item 9 was "Clarify intellectual property policies: International Standards must have clearly stated IP policies, and avoid unacceptable patent encumbrances."<br />Historically, patents have been a fraught topic with an uneasy co-existence with standards. Perhaps (within JTC 1) one of the most notorious recent examples surrounded the JPEG Standard and, in part prompted by such problems, there are certainly many people of good will wanting better management of IP in standards. Judging by some recent developments in document format standardisation, it seems probable that this will be the area where progress can next be made...<br />The Myth of Unencumbered Technology: Given the situation we are evidently in, it is clear that no technology is safe. The brazen claims of corporations, the lack of diligence by the US Patent Office, and the capriciousness of courts mean that any technology, at any time, may suddenly become patent encumbered. Technical people - being logical and reasonable - often make the mistake of thinking the system is bound by logic and reason; they assume that because they can see 'obvious' prior art, then it will apply; however, as the case of the i4i patent vividly illustrates, this is simply not so.<br />While the "broken stack" of patents is beyond repair by any single standards body, at the very least the correct application of the rules can make the situation for users of document format standards more transparent and certain. In the interests of making progress in this direction, it seems a number of points need addressing now. 
(1) Users should be aware that the various covenants and promises being pointed to by the US vendors need not be relevant to them as regards standards use. Done properly, International Standardization can give a clearer and stronger guarantee of license availability -- without the caveats, interpretable points and exit strategies these vendors' documents invariably have. (2) In particular, it should be of concern to NBs that there is no entry in JTC 1's patent database for OOXML (there is for DIS 29500, its precursor text, a ZRAND promise from Microsoft); there is no entry whatsoever for ODF... (3) In the case of the i4i patent, one implementer has already commented that implementing CustomXML in its entirety may run the risk of infringement -- and this is probably, after all, why Microsoft patched Word in the field to remove some aspects of its CustomXML support... (4) When declaring their patents to JTC 1, patent holders are given an option whether to make a general declaration about the patents that apply to a standard, or to make a particular declaration about each and every itemized patent which applies. I believe NBs should be insisting that patent holders enumerate precisely the patents they hold which they claim apply... There is obviously much to do, and I am hoping that at the forthcoming SC 34 meetings in Stockholm this work can begin...<br /><a href="http://www.adjb.net/post/Document-Format-Standards-and-Patents.aspx">http://www.adjb.net/post/Document-Format-Standards-and-Patents.aspx</a> See also article Part 1: <a href="http://www.adjb.net/post/Reforming-Standardisation-in-JTC-1-e28093-Part-1.aspx">http://www.adjb.net/post/Reforming-Standardisation-in-JTC-1-e28093-Part-1.aspx</a>