
Saturday, January 23, 2010

Windows Domain to Amazon EC2 Single Sign-On Access Solutions

David Chappell, the Principal of Chappell & Associates, US, has written a whitepaper proposing several solutions for Single Sign-on (SSO) access to applications deployed on Amazon EC2 from a Windows domain. InfoQ explored these solutions to understand the benefits and tradeoffs each one presents.
The paper is: "Connecting to the Cloud: Providing Single Sign-On to Amazon EC2 Applications from an On-Premises Windows Domain." Excerpt: "Users hate having multiple passwords. Help desks hate multiple passwords too, since users forget them. Even IT operations people hate them, because managing and synchronizing multiple passwords is expensive and problematic. Providing single sign-on (SSO) lets users log in just once, then access many applications without needing to enter more passwords. It can also make organizations more secure by reducing the number of passwords that must be maintained. And for vendors of Software as a Service (SaaS), SSO can make their applications more attractive by letting users access them with less effort...
With the emergence of cloud platforms, new SSO challenges have appeared. For example, Amazon Web Services (AWS) provides the Amazon Elastic Compute Cloud (Amazon EC2). This technology lets a customer create Amazon Machine Images (AMIs) containing an operating system, applications, and more. The customer can then launch instances of those AMIs (virtual machines) to run applications on the Amazon cloud. Similarly, Microsoft provides Windows Azure, which lets customers run Windows applications on Microsoft's cloud. When an application running on a cloud platform needs to be accessed by a user in an on-premises Windows domain, giving that user single sign-on makes sense. Fortunately, there are several ways to do this..."
"SSO is an important feature to have when the number of on-premises andInternet accounts created by users grow to large numbers, making thetask of administering them increasingly difficult. This will likelyresult in more requests to software vendors for SSO support/solutionssince these make the users' lives simpler and reduce administration costs..."
http://www.infoq.com/news/2010/01/Windows-EC2-Single-Sign-On
See also the white paper: http://download.microsoft.com/download/6/C/2/6C2DBA25-C4D3-474B-8977-E7D296FBFE71/EC2-Windows%20SSO%20v1%200--Chappell.pdf

W3C Invites Implementations of W3C XSD Component Designators

Members of the W3C XML Schema Working Group now invite implementation of the Candidate Recommendation specification "W3C XML Schema Definition Language (XSD): Component Designators." The Candidate Recommendation review period for this document extends until 1-March-2010. Comments on this document should be made in W3C's public installation of Bugzilla, specifying 'XML Schema' as the product.
A test suite is under development that identifies the set of canonical schema component paths that should be generated for particular test schemas, and that relates certain non-canonical component paths to the corresponding canonical schema component paths. The W3C XML Schema Working Group has agreed on the following specific CR exit criteria: (1) A test suite is available which provides cases for each axis and component type, both for the XML Schema 1.0 component model and the XML Schema 1.1 component model. (2) Generation or interpretation of canonical schema component paths has been implemented successfully by at least two independent implementations. (3) Generation or interpretation of each axis and component for non-canonical schema component paths has been implemented successfully by at least two independent implementations. (4) The Working Group has responded formally to all issues raised against this document during the Candidate Recommendation period.
"XML Schema: Component Designators" defines a scheme for identifying XMLSchema components as specified by 'XML Schema Part 1: Structures' and'XML Schema Part 2: Datatypes'. Part 1 of the W3C XML Schema DefinitionLanguage (XSD) recommendation defines these schema components, whereSection 2.2 lays out the inventory of schema components into three classes:(a) Primary components: simple and complex type definitions, attributedeclarations, and element declarations (b) Secondary components: attributeand model group definitions, identity-constraint definitions, and notationdeclarations (c) "Helper" components: annotations, model groups, particles,wildcards, and attribute uses In addition there is a master schemacomponent, the schema component representing the schema as a whole..."
http://www.w3.org/TR/2010/CR-xmlschema-ref-20100119/
See also the W3C XML Schema Working Group: http://www.w3.org/XML/Schema

Principles for Standardized REST Authentication

"Working with the programming APIs for cloud providers and SaaS vendorshas taught me two things: (i) There are very few truly RESTfulprogramming APIs. (ii) Everyone feels the need to write a customauthentication protocol. I've programmed against more web servicesinterfaces than I can remember. In the last month alone, I've writtento web services APIs for Aria, AWS, enStratus, GoGrid, the RackspaceCloud, VMOps, Xero, and Zendesk. Each one requires a differentauthentication mechanism. Two of them (Aria and AWS) defy all logic andrequire different authentication mechanisms for different parts of theirrespective APIs. Let's end this here and now...
Here's a set of standards that I think should be in place for any REST authentication scheme. Here's the summary: (1) All REST API calls must take place over HTTPS with a certificate signed by a trusted CA. All clients must validate the certificate before interacting with the server. (2) All REST API calls should occur through dedicated API keys consisting of an identifying component and a shared, private secret. Systems must allow a given customer to have multiple active API keys and de-activate individual keys easily. (3) All REST queries must be authenticated by signing the query parameters sorted in lower-case, alphabetical order using the private credential as the signing token. Signing should occur before URL encoding the query string...
This is a battle I know I am going to lose. After all, people still can't settle on being truly RESTful (just look at the AWS EC2 monstrosity of an API). Authentication is almost certainly a secondary consideration. If you are reading this post and just don't want to listen to my suggestions, I plead with you to follow someone else's example and not roll your own authentication scheme..."
Dilip Krishnan (blog post 'RESTful API Authentication Schemes') provides a summary and encourages readers to weigh in on the recommendations.
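Rule (3) is easy to get wrong, so a sketch may help. The following TypeScript fragment is a minimal illustration, assuming HMAC-SHA256 as the signature algorithm and invented parameter names (the post mandates neither):

    import { createHmac } from "crypto";

    // Sort keys in lower-case alphabetical order, join the raw (not yet
    // URL-encoded) pairs, and sign the result with the private secret.
    function signQuery(params: Record<string, string>, secret: string): string {
      const canonical = Object.keys(params)
        .sort((a, b) => a.toLowerCase().localeCompare(b.toLowerCase()))
        .map((k) => `${k}=${params[k]}`)
        .join("&");
      return createHmac("sha256", secret).update(canonical).digest("base64");
    }

    // Illustrative usage: 'apiKey' is the identifying half of the key pair;
    // the private half never travels with the request.
    const sig = signQuery(
      { apiKey: "AKID123", action: "listServers", timestamp: "1264204800" },
      "private-half-of-key"
    );
    // ...then append encodeURIComponent(sig) to the query string and send
    // the request over HTTPS, per rules (1) and (3).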

http://broadcast.oreilly.com/2009/12/principles-for-standardized-rest-authentication.html
See also Dilip Krishnan: http://www.infoq.com/news/2010/01/rest-api-authentication-schemes

Call for Participation: W3C Workshop on the Next Steps for RDF

W3C is organizing a Workshop on the Next Steps for RDF around June 2010, as described in the Call for Participation. The deadline for position papers is 29-March-2010. Each participant in the workshop must be associated with a position paper. W3C membership is not required to participate in the Workshop.

The goal of the workshop is to gather feedback from the Web community on whether and, if yes, in which direction RDF should evolve. One of the main issues the Workshop should help decide is whether it is timely for W3C to start a new RDF Working Group to define and standardize a next version of RDF.

While a new version of RDF may include changes in terms of features, semantics, and serialization syntax(es), backward compatibility is of paramount importance. Indeed, RDF has been deployed by tools and applications, and the last few years have seen a significant uptake of Semantic Web technologies and publication of billions of triples stemming from public databases (see, e.g., the Linked Open Data community). It would therefore be detrimental to this evolution if RDF were seen as unstable and if the validity of current applications were jeopardized by a future evolution. As a consequence, with any changes to RDF, backward compatibility requirements should be formalized..."

Background: "The Resource Description Framework (RDF), including thegeneral concepts, its semantics, and an XML Serialization (RDF/XML),have been published in 2004. Since then, RDF has become the corearchitectural block of the Semantic Web, with a significant deploymentin terms of tools and applications. As a result of the R&D activitiesand the publication of newer standards like SPARQL, OWL, POWDER, orSKOS, but also due to the large scale deployment and applications, anumber of issues regarding RDF came to the fore. Some of those arerelated to features that are not present in the current version of RDFbut which became necessary in practice (e.g., the concept of NamedGraphs). Others result from the difficulties caused by the designdecisions taken in the course of defining the 2004 version of RDF (e.g.,restrictions whereby literals cannot appear as subjects). Definitionof newer standards have also revealed difficulties when applying thesemantics of RDF (e.g., the exact semantics of blank nodes for RIF andOWL, or the missing connection between URI-s and the RDF resourcesnamed by those URI-s for POWDER). New serializations formats (e.g., Turtle)have gained a significant support by the community, while thecomplications in RDF/XML syntax have created some difficulties in practiceas well as in the acceptance of RDF by a larger Web community. Finally,at present there is no standard programming API to manage RDF data;the need may arise to define such a standard either in a general,programming language independent way or for some of the importantlanguages (Javascript/ECMAscript, Java, Python, etc)..." More Info

Earth Observation Application Profile for OGC Catalogue Services

The Open Geospatial Consortium (OGC) has announced adoption and availability of the "OGC Earth Observation (EO) Application Profile for the OGC Catalogue Services -- (CSW) Specification," Version 2.0.2. The EO-CSW standard will benefit a wide range of stakeholders involved in the provision and use of data generated by satellite-borne and aerial radar, optical and atmospheric sensors.

The EO-CSW standard describes a set of interfaces, bindings and encodings that can be implemented in catalog servers that data providers will use to publish collections of descriptive information (metadata) about Earth Observation data and services. Developers can also implement this standard as part of Web clients that enable data users and their applications to very efficiently search and exploit these collections of Earth Observation data and services.

This specification is part of a set that describes services for managing Earth Observation (EO) data products. The services include collection-level and product-level catalogues, online ordering for existing and future products, online access, etc. These services are put into context in an overall document, 'Best Practices for EO Products'. The services proposed are intended to support the identification of EO data products from previously identified data collections; in other words, the search and presentation of metadata from catalogues of EO data products.

The intent of the profile is to describe a cost-effective interface that can be supported by many data providers (satellite operators, data distributors...), most of whom have existing (and relatively complex) facilities for the management of these data. The strategy is to reuse as far as possible the SOAP binding defined in the ISO Application Profile, except the schemas defining the information model. To achieve a cost-effective interface, some choices will be limited by textual comments. EO data product collections are usually structured to describe data products derived from a single sensor onboard a satellite or series of satellites. Products from different classes of sensors usually require specific product metadata. The following classes of products have been identified so far: radar, optical, atmospheric. The proposed approach is to identify a common set of elements grouped in a common (HMA) schema and extend this common schema to add the sensor-specific metadata. More Info
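For readers new to OGC catalogues: the base CSW protocol that this profile extends exposes search through a GetRecords operation. As a rough, illustrative orientation only (the EO profile itself specifies a SOAP binding reusing the ISO Application Profile, and the host name and query values below are invented), a generic CSW 2.0.2 key-value-pair request might look like:

    http://catalogue.example.org/csw?service=CSW&version=2.0.2
      &request=GetRecords&typeNames=csw:Record&elementSetName=summary
      &constraintLanguage=CQL_TEXT
      &constraint=AnyText%20LIKE%20%27%25radar%25%27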

W3C First Public Working Draft for Contacts API Specification

Members of the W3C Device APIs and Policy Working Group have published a First Public Working Draft for "The Contacts API" specification. It defines an API that provides access to a user's unified address book.
The API has been designed to meet requirements and use cases specified in the draft. Use cases: (1) Upload a set of contact details to a user's social network; (2) Download a set of contact details from a user's social network; (3) A user would like to keep their work address book and personal address book separate; (4) A user maintains a single unified address book but would like to maintain groups of contacts within that address book; (5) Use a web interface to manage contact details on both the user's device and the web; (6) A user would like to export contacts from one address book store and import them to another address book store; (7) A user would like to be notified when friends have a birthday coming up; (8) A user would like his/her contacts to update their own contact details via a mediating Web Application and sync any changes to their current address book.

Details: "The Contacts API defines a high-level interface to provideaccess to the user's unified contact information, such as names,addresses and other contact information. The API itself is agnostic ofany underlying address book sources and data formats... The Contactsinterface exposes a database collecting contacts information, suchthat they may be created, found, read, updated, and deleted. Multipleaddress books, taken from different sources, can be represented withinthis unified address book interface...

The programmatic styles of the Contacts API and Geolocation API are very similar, and because they both have the same implied user experience within the same implied User Agent, the general security and privacy considerations of both APIs should remain common. The ability to align the security and privacy considerations of the Geolocation API with DAP APIs is important for the potential future benefit of making any security and privacy mechanisms developed within the DAP WG applicable to the Geolocation API at some point in its own ongoing development... A conforming implementation of this specification must provide a mechanism that protects the user's privacy, and this mechanism should ensure that no contact information is creatable, retrievable, updatable or removable without the user's express permission... More Info

Friday, January 22, 2010

Microsoft Urges Laws to Boost Trust in the Cloud

Microsoft is so concerned about the future of cloud computing that it's urging the government to step in. In a speech Wednesday [2010-01-20], Microsoft general counsel and senior vice president Brad Smith called on government and business to shore up confidence in cloud computing by tackling issues of privacy and security -- two major concerns that have been voiced about the cloud...
A Microsoft survey found that 58 percent of the public and 86 percent of business leaders are excited about the possibilities of cloud computing. But more than 90 percent of them are worried about the security, availability, and privacy of their data as it rests in the cloud. Microsoft said it also found that most of the people surveyed believe the U.S. should set up laws and policies to govern cloud computing...
During his speech, Smith proposed that Washington create a Cloud Computing Advancement Act that would protect consumers and give the government tools to handle issues such as data privacy and security. He added that an international dialogue is crucial in addressing data security so that information is protected no matter where it resides. In proposing legislation, Microsoft is looking to the government to enact specific measures, including to: (1) Beef up the Electronic Communications Privacy Act to more clearly define and protect the privacy of consumers and businesses; (2) Update the Computer Fraud and Abuse Act so that law enforcement has the resources it needs to combat hackers; (3) Establish truth-in-cloud-computing principles so that consumers and businesses know how their information will be accessed and secured; (4) Set up a framework so that differences in regulations on cloud computing among various countries can be better clarified and reconciled... More Info

IETF Internet Draft on Web Linking Considered as a Proposed Standard

The Internet Engineering Steering Group (IESG) announced receipt of a request to consider version -07 of the "Web Linking" specification as an IETF Proposed Standard. The IESG plans to make a decision in the next few weeks, and solicits final comments on this action through 2010-02-17.
This document specifies relation types for Web links, and defines a registry for them. It also defines the use of such links in HTTP headers with the Link header field.
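For illustration, a typed link carried in an HTTP response header takes this general form (the URI, relation type, and title below are example values in the style used by the draft):

    Link: <http://example.com/TheBook/chapter2>; rel="previous";
          title="previous chapter"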
Background: "A means of indicating the relationships between resourceson the Web, as well as indicating the type of those relationships, hasbeen available for some time in HTML, and more recently in Atom (IETFRFC 4287). These mechanisms, although conceptually similar, are separatelyspecified. However, links between resources need not be format-specific;it can be useful to have typed links that are independent of theirserialisation, especially when a resource has representations inmultiple formats. To this end, this document defines a framework fortyped links that isn't specific to a particular serialisation orapplication. It does so by re-defining the link relation registryestablished by Atom to have a broader domain, and adding to it therelations that are defined by HTML.
Appendix E (Document History) in this 25-page document lists some sixteen (16) changes since the publication of version -06 (20 pages): Allowed multiple spaces between relation types; Relaxed requirements for registered relations; Removed Defining New Link Serialisations appendix; Added Field registry; Added registry XML format; Changed registration procedure to use mailing list(s), giving the Designated Experts more responsibility for the smooth running of the registry; Loosened prohibition against media-specific relation types to SHOULD NOT; Disallowed registration of media-specific relation types -- can still be used as extension types; Clarified that parsers are responsible for resolving relative URIs; Fixed ABNF for extended-initial-value; Fixed 'title*' parameter quoting in example; Added notes for registered relations that lack a reference; Added 'hreflang' parameter; Clarified status of 'rev'; Removed advice to use '@profile' in HTML4; Clarified what multiple 'title*' and 'hreflang' attributes mean... More Info

Heartland Moves To Encrypted Payment System

"Responding to its widely reported and massive data breach that tookplace a year ago, Heartland Payment Systems will be moving to anend-to-end encryption system for payment transactions, according toChairman and CEO Robert Carr: 'We're using encryption on the front endto keep card numbers out of our merchants' systems, and to also haveall the card numbers coming through our network be encrypted throughout,except at the point of decryption'. In January 2009, Heartland PaymentSystems reported that it found that intruders had penetrated its systemsand planted software to harvest card numbers, using SQL injectionattacks to plant programs inside the network that would sniff the cardnumbers.
Heartland, which handles more than 4 billion transactions annually for more than 250,000 merchants, will be using the Thales nShield Connect hardware security module along with Voltage Security's SecureData encryption software as the basis of this capability... This new system involves installing a tamper-resistant security module (TRSM) at the point-of-sale system. When a card is swiped, the TRSM encrypts the card's number with a public key using Identity Based Encryption, and it is sent to the Heartland gateway. This new system will offer merchants the capability to encrypt cards so that merchants themselves will not house the card numbers on their systems at all, explained Terrence Spies, the chief technology officer for Voltage Security. Most merchant payment-processing systems encrypt the PIN or security numbers of cards. The card numbers themselves aren't typically encrypted at the cash registers, also called point-of-sale systems.
Spies: "The HSM controls the process of decrypting the private key...This system will use a technique called format-preserving encryption(FPE), which means the encrypted numbers will be the same length asthe original card numbers, allowing the encrypted numbers to be usedin other database systems as identifiers, rather than the originalnumbers. Heartland piloted a few test systems with merchants last yearand now plans to start offering the service to all its customers.Because moving to the card encryption will require purchasing newhardware for the register, Heartland will offer the end-to-end encryptionas an opt-in... Carr said that if the merchant implements the systemcorrectly and it then suffers a breach involving the leakage of cardnumbers, then Heartland will assume the liability for the breach..." More Info

Open Source Clouds on the Rise

"Cloud computing has the potential to transform how government agenciestap into IT services, and open source is an underlying technology inseveral of the early government clouds that have been developed... Achallenge for government agencies is determining how one cloud can workwith other clouds and IT systems to provide the same secure, robustinfrastructure that exists with traditional IT environments. Here'swhere agencies may turn to open source, which has the advantage of'openness,' providing flexibility, interoperability, and the potentialfor customization without the risks of vendor lock-in...
Components of the open source software stack that are being used to build and manage clouds include the Linux operating system, Eucalyptus (which incorporates the Apache Axis2 Web services engine, Mule enterprise service bus, Rampart security, and Libvirt virtualization), Datacloud, Nimbus' EC2 interface (lets organizations access public cloud infrastructures), virtual machine hypervisors, and Zend Technologies' Simple API (can be used for calling a cloud service from multiple clouds; GoGrid, IBM, Microsoft, Nirvanix Storage Delivery Network, and Rackspace Cloud Files all support it).
In an example of how the pieces fit together, NASA's Ames Research Center is using Eucalyptus, the Lustre file system, the Django Web application framework, and the SOLR indexing and search engine in its Nebula cloud. Standards are still needed to ensure the viability of open source clouds, and reliability and security have to be proven. With those concerns on the table, the gradual adoption of cloud computing, along with open source, is the path we're on. Open source can help minimize up-front investment, give agencies control over their clouds, and tap into shared resources..." More Info

W3C First Public Working Draft for Selectors API Level 2

Members of the W3C Web Applications Working Group have published a First Public Working Draft for the specification "Selectors API Level 2." Selectors, which are widely used in Cascading Style Sheets (CSS), are patterns that match against elements in a tree structure...
The Selectors API specification defines methods for retrieving Element nodes from the DOM by matching against a group of selectors, and for testing if a given element matches a particular selector. It is often desirable to perform DOM operations on a specific set of elements in a document. These methods simplify the process of acquiring and testing specific elements, especially compared with the more verbose techniques defined and used in the past...
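A short TypeScript example shows both halves of the API: retrieval via querySelector/querySelectorAll (from Level 1), and the per-element matching test this draft adds. Note the matching method was renamed across drafts and eventually shipped as Element.matches, which is what this sketch uses:

    // First Element matching the selector, or null.
    const warning = document.querySelector("div.warning");

    // Static NodeList of every matching element.
    const navLinks = document.querySelectorAll<HTMLAnchorElement>("ul.nav > li > a");
    navLinks.forEach((a) => console.log(a.href));

    // Test a single element against a selector group.
    if (warning !== null && warning.matches("div.warning, p.warning")) {
      console.log("element matches the selector");
    }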
Implementors should be aware that this specification is not stable. Implementors who are not taking part in the discussions are likely to find the specification changing out from under them in incompatible ways. Vendors interested in implementing the specification before it eventually reaches the Candidate Recommendation stage should join the appropriate mailing lists and take part in the discussions..." More Info

OASIS Public Review Draft for Production Planning and Scheduling (PPS)

Members of the OASIS Production Planning and Scheduling (PPS) Technical Committee have released an approved set of PPS specifications for public review through March 12, 2010. This TC was chartered in 2003 to "develop common object models and corresponding XML schemas for production planning and scheduling software, which can communicate with each other in order to establish collaborative planning and scheduling on intra and/or inter enterprises in manufacturing industries."
"OASIS PPS (Production Planning and Scheduling) specifications deal withproblems of decision-making in all manufacturing companies who want tohave a sophisticated information system for production planning andscheduling. PPS specifications provide XML schema and communicationprotocols for information exchange among manufacturing applicationprograms in the web-services environment...
"PPS (Production Planning and Scheduling) Part 1: Core Elements, Version1.0" focuses on an information model of core elements which can be usedas ontology in the production planning and scheduling domain. Since theelements have been designed without particular contexts in planning andscheduling, they can be used in any specific type of messages as abuilding block depending on the context of application programs.
"PPS (Production Planning and Scheduling) Part 2: Transaction Messages,Version 1.0" focuses on transaction messages that represent domaininformation sending or receiving by application programs in accordancewith the context of the communication, as well as transaction rules forcontexts such as pushing and pulling of the information required..."PPS (Production Planning and Scheduling) Part 3: Profile Specifications,Version 1.0" focuses on profiles of application programs that may exchangethe messages. Application profile and implementation profile are defined.Implementation profile shows capability of application programs in termsof services for message exchange, selecting from all exchange itemsdefined in the application profile. The profile can be used fordefinition of a minimum level of implementation of application programswho are involved in a community of data exchange..." More Info

OAuth Web Resource Authorization Profiles

IETF has published an Internet Draft for "OAuth Web Resource Authorization Profiles." The OAuth Web Resource Authorization Profiles (OAuth WRAP) allow a server hosting a Protected Resource to delegate authorization to one or more authorities. An application (Client) accesses the Protected Resource by presenting a short-lived, opaque, bearer token (Access Token) obtained from an authority (Authorization Server). There are Profiles for how a Client may obtain an Access Token when acting autonomously or on behalf of a User.
Background: "As the internet has evolved, there is a growing trend fora variety of applications (Clients) to access resources through an APIover HTTP or other protocols. Often these resources require authorizationfor access and are Protected Resources. The systems that are trustedto make authorization decisions may be independent from the ProtectedResources for scale and security reasons. The OAuth Web ResourceAuthorization Profiles (OAuth WRAP) enable a Protected Resource todelegate the authorization to access a Protected Resource to one ormore trusted authorities.
Clients that wish to access a Protected Resource first obtain authorization from a trusted authority (Authorization Server). Different credentials and profiles can be used to obtain this authorization, but once authorized, the Client is provided an Access Token, and possibly a Refresh Token to obtain new Access Tokens. The Authorization Server typically includes authorization information in the Access Token and digitally signs the Access Token. The Protected Resource can verify that an Access Token received from a Client was issued by a trusted Authorization Server and is valid. The Protected Resource can then examine the contents of the Access Token to determine the authorization that has been granted to the Client.
The Access Token is opaque to the Client, and can be any format agreed to between the Authorization Server and the Protected Resource, enabling existing systems to reuse suitable tokens, or to use a standard token format such as a Simple Web Token or JSON Web Token. Since the Access Token provides the Client authorization to the Protected Resource for the life of the Access Token, the Authorization Server should issue Access Tokens that expire within an appropriate time. When an Access Token expires, the Client requests a new Access Token from the Authorization Server, which once again computes the Client's authorization and issues a new Access Token... Two Profiles are recommended for scenarios involving a Client acting autonomously: (1) Client Account and Password Profile, where the Client is provisioned with an account name and corresponding password by the Authorization Server; (2) Assertion Profile, which enables a Client with a SAML or other assertion recognized by the Authorization Server...
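As a rough sketch of the autonomous flow (endpoint paths, host names, and credential values below are invented for illustration; the wrap_* parameter names and the WRAP Authorization scheme follow the Internet Draft's Client Account and Password Profile as described above):

    POST /wrap/token HTTP/1.1
    Host: authz.example.com
    Content-Type: application/x-www-form-urlencoded

    wrap_name=service-account&wrap_password=s3cr3t

    HTTP/1.1 200 OK

    wrap_access_token=opaque-token-value&wrap_refresh_token=refresh-value

    GET /protected/resource HTTP/1.1
    Host: api.example.com
    Authorization: WRAP access_token="opaque-token-value"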
More Info