"The Windows Azure platform is an Internet-scale cloud computing
services platform hosted in Microsoft data centers. Windows Azure tools
provide functionality to build solutions that include a cloud services
operating system and a set of developer services. The key parts of the
Windows Azure platform are Windows Azure (the application container),
Microsoft SQL Azure, and Windows Azure platform AppFabric.
The Windows Azure platform is part of the Microsoft cloud, which
consists of multiple categories of services: (1) Cloud-based
applications: These are services that are always available and highly
scalable. They run in the Microsoft cloud, and consumers can use them
directly. Examples include Bing, Windows Live Hotmail, and Office.
(2) Software services: These services are hosted instances of
Microsoft's enterprise server products that consumers can use directly.
Examples include Exchange Online, SharePoint Online, Office
Communications Online, etc. (3) Platform services: This is where the
Windows Azure platform itself is positioned. It serves as an application
platform public cloud that developers can use to deploy next-generation,
Internet-scale, and always available solutions. (4) Infrastructure
services: There is a limited set of elements of the Windows Azure
platform that can support cloud-based infrastructure resources.
SQL Azure is a cloud-based relational database service built on SQL
Server technologies that exposes a fault-tolerant, scalable, and
multi-tenant database service. SQL Azure does not exist as hosted
instances of SQL Server. Instead, it uses a cloud fabric layer to abstract
and encapsulate the underlying technologies required for provisioning,
server administration, patching, health monitoring, and lifecycle
management.
Summary of Key Points: (1) The Windows Azure platform is primarily a
PaaS deployed in a public cloud managed by Microsoft. (2) Windows Azure
platform provides a distinct set of capabilities suitable for building
scalable and reliable cloud-based services. (3) The overall Windows
Azure platform further encompasses SQL Azure and Windows Azure platform
AppFabric." More Info
Friday, August 13, 2010
Computers in Patient Care: The Promise and the Challenge
"Why is it that in terms of automating medical information, we are
still attempting to implement concepts that are decades old? With all
of the computerization of so many aspects of our daily lives, medical
informatics has had limited impact on day-to-day patient care. We have
witnessed slow progress in using technology to gather, process, and
disseminate patient information, to guide medical practitioners in
their provision of care and to couple them to appropriate medical
information for their patients' care...
The first challenge in applying medical informatics to the daily
practice of care is to decide how computerization can help patient care
and to determine the necessary steps to achieve that goal. Several
other early attempts were made to apply computerization to health
care. Most were mainframe-based, driving 'dumb' terminals. Many dealt
only with the low-hanging fruit of patient order entry and results
reporting, with little or no additional clinical data entry. Also,
many systems did not attempt to interface with the information
originator (e.g., physician) but rather delegated the system use to
a hospital ward clerk or nurse, thereby negating the possibility of
providing medical guidance to the physician, such as a warning about
the dangers of using a specific drug.
We have made significant technological advances that solve many of
these early shortcomings. Availability of mass storage is no longer a
significant issue. We started with a freezer-sized disk drive holding
7 MB (which was not very reliable); we now have enterprise storage
systems providing extremely large amounts of storage for less than $1
per gigabyte, and they don't take up an entire room. This advance in
storage has been accompanied by a concomitant series of advances in
file structures, database design, and database maintenance utilities,
greatly simplifying and accelerating data access and maintenance.
[But] if we truly want to develop an information utility for
health-care delivery in an acute care setting (such as an intensive
care unit or emergency department), we need to strive for overall
system reliability at least on the order of our electric power grid...
One significant issue is the balkanization of medical computerization.
Historically, there has been little appreciation of the need for an
overall system. Instead we have a proliferation of systems that do
not integrate well with each other. For example, a patient who is
cared for in my emergency department may have his/her data spread
across nine different systems during a single visit, with varying
degrees of integration and communication among these systems: EDIS
(emergency department information system), prehospital care (ambulance)
documentation system, the hospital ADT (admission/discharge/transfer)
system, computerized clinical laboratory system, electronic data
management (medical records) imaging system, hospital pharmacy system,
vital-signs monitoring system, hospital radiology ordering system,
and PACS system...." More Info See also XML in Clinical Research and Healthcare Industries:
IETF Approves Symmetric Key Package Content Type Specification
The Internet Engineering Steering Group (IESG) has announced approval
of the "Symmetric Key Package Content Type" Specification as an IETF
Proposed Standard. Hannes Tschofenig is the document shepherd for this
document, and Tim Polk is the IETF Responsible Area Director. The
specification was produced by members of the IETF Provisioning of
Symmetric Keys (KEYPROV) Working Group.
"This document provides the ASN.1 variant of the Portable Symmetric Key
Container (PSKC), which is defined using XML in the I-D 'Portable
Symmetric Key Container (PSKC).' The symmetric key container defines a
transport-independent mechanism for conveying one or more symmetric keys
as well as any associated attributes. The container by itself is insecure;
it can be secured using either the Dynamic Symmetric Key Provisioning
Protocol (DSKPP) or a CMS protecting content type, per RFC 5652. In
addition to the key container, this document also defines an ASN.1 version
of the XML elements and attributes defined in PSKC.
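For orientation, the XML-based container that this ASN.1 module mirrors looks roughly like the sketch below; the element names follow the PSKC draft, while the key identifier, issuer, algorithm URI, and Base64 secret value are placeholders, not a real provisioning payload.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<KeyContainer Version="1.0"
    xmlns="urn:ietf:params:xml:ns:keyprov:pskc">
  <KeyPackage>
    <!-- One symmetric key plus its attributes -->
    <Key Id="12345678"
         Algorithm="urn:ietf:params:xml:ns:keyprov:pskc:hotp">
      <Issuer>Example-Issuer</Issuer>
      <Data>
        <Secret>
          <!-- Plaintext only for illustration; in practice the container
               is protected with DSKPP or CMS as described above -->
          <PlainValue>MTIzNDU2Nzg5MDEyMzQ1Njc4OTA=</PlainValue>
        </Secret>
      </Data>
    </Key>
  </KeyPackage>
</KeyContainer>
```

The ASN.1 variant defined by this specification carries the same key material and attributes in a DER-friendly encoding for environments that do not process XML.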
Working Group Summary: The WG agreed that this container would be
optional, but there was a contingent (both in the WG and in
the IEEE) that wanted the ASN.1 container. The format for the container
has been stable since version -02. The ASN.1 converted XML elements
and attributes were added in the last version to ensure alignment with
PSKC.
Document Quality: The text of this document is derived from the XML
elements and attributes defined in draft-ietf-keyprov-pskc. As such,
this document represents the ASN.1-based version of the XML-based
counterpart. More Info See also the IETF Provisioning of Symmetric Keys (KEYPROV) Working Group:
Building an AtomPub Server Using WCF Data Services
OData (odata.org) builds on the HTTP-based goodness of Atom for
publishing data; AtomPub for creating, updating and deleting data;
and the Microsoft Entity Data Model (EDM) for defining the types of
data.
If you have a JavaScript client, you can get the data back directly in
JSON instead of Atom format, and if you've got something else --
including Excel, the Microsoft .NET Framework, PHP, AJAX and more --
there are client libraries for forming OData requests and consuming
OData responses.
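As a sketch of what "forming OData requests" looks like at the HTTP level, here is a small Python helper that builds a query URI using the standard OData system query options; the service root and entity set name used in the example are made up, not a real endpoint.

```python
from urllib.parse import urlencode

def odata_query(service_root, entity_set, filter_expr=None, top=None, fmt="json"):
    """Build an OData query URI with the standard system query options
    ($filter, $top, $format). Names here are illustrative only."""
    opts = {}
    if filter_expr is not None:
        opts["$filter"] = filter_expr   # server-side predicate
    if top is not None:
        opts["$top"] = str(top)         # limit the result count
    opts["$format"] = fmt               # e.g. JSON instead of Atom
    return "%s/%s?%s" % (service_root.rstrip("/"), entity_set, urlencode(opts))
```

For example, `odata_query("http://example.com/svc", "Products", top=5)` produces a URI whose query string carries `$top=5` and `$format=json` (the `$` is percent-encoded as `%24` by `urlencode`).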
If you're using the .NET Framework on the server side, Microsoft also
provides an easy-to-use library called WCF Data Services for exposing
.NET Framework types or databases supported by the Microsoft Entity
Framework as OData sources. This makes it easy to expose your data
over the Internet in an HTTP- and standards-based way.
[However] there are some things that you might like to do with OData
that aren't quite part of the out-of-box experience, such as integrating
OData with existing Atom- and AtomPub-based readers and writers..." More Info
Computing Cloud Seen as Answer for Consolidated Audit Trail
"FTEN, a supplier of risk management software to bulge bracket firms on
Wall Street, has proposed that the Securities and Exchange Commission
rely on real-time data stored in a nationwide cloud of computing power
and networks to create an effective audit trail of stock market activity.
FTEN provides risk management, routing, surveillance, compliance and
market data services to market participants. The firm proposed in a
letter that the SEC look to already deployed and commercially available
systems that capture order and execution data in real-time from stock
exchanges, electronic communication networks, alternative trading systems
and dark pools to start creating the trail.
The data from all markets then could be mapped back to a unified
format that would create a normalized set of data that regulators
could review in real time for signs of market disruptions or abuse...
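A sketch of what that kind of field-level normalization might look like; the venue names and field mappings below are invented for illustration (the article does not describe FTEN's actual record formats):

```python
def normalize(record, venue):
    """Map one venue-specific execution record onto a common schema.
    Venue identifiers and source field names are hypothetical."""
    mappings = {
        "venue_a": {"sym": "symbol", "qty": "shares", "px": "price"},
        "venue_b": {"ticker": "symbol", "size": "shares", "prc": "price"},
    }
    out = {common: record[src] for src, common in mappings[venue].items()}
    out["venue"] = venue  # keep provenance so records can be traced back
    return out
```

With per-venue mappings like these, records from every exchange, ECN, ATS, or dark pool land in one schema that a regulator could scan uniformly in real time.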
Ted Myerson, FTEN's CEO, said FTEN's commercially deployed At-Trade secure
data cloud already aggregates data from 50 sources, with a wide variety
of symbol directories, unifies it into a common format and feeds it back
to private firms... FTEN says it provides real-time risk management and
surveillance on as many as 17 billion shares of stock a day in the
United States. That, it says, equates to risk calculations involving
$150 billion worth of shares a day... FTEN did not put a price tag on
what it would take the securities industry to build out a consolidated
audit trail system based on its At-Trade cloud of compute power and
online data..." More Info
The Arrival of HTML 5: Lots of New Features, All Eagerly Awaited
"HTML (Hyper Text Markup Language) is one of the underpinning
technologies of the modern web, with the lion's share of web users'
Internet activities founded on it. HTML now stands on the brink of
the next change -- the coming of HTML 5. At present, the Internet
already contains a handful of HTML 5 specification outlines which
partially cover HTML 5 features and concepts. In this article, we
review the current state of HTML and describe the most significant
HTML 5 innovations.
Offline Potential: Some time ago, a new specification for client-side
database support with interesting applications was introduced. While
this feature had vast potential, it has been excluded from current
specification drafts due to insufficient interest from vendors which
use various SQL back-ends. As such, the only offline feature currently
available in HTML 5 is flexible online/offline resources management
using cache manifests. Cache manifests allow an author of a document
to specify which referenced resources must be cached in browser data
store (e.g., static images, external CSS and JavaScript files) and
which must be retrieved from a server (e.g., time-sensitive data like
stock price graphs, responses from web services invoked from within
JavaScript). The manifest also provides means for specifying fallback
offline replacements for resources which must not be cached. This
mechanism gives the ability to compose HTML documents which can be
viewed offline.
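A minimal cache manifest along the lines described might look like the following; all file names are made up, and the page opts in by referencing the manifest (e.g. `<html manifest="example.appcache">`):

```
CACHE MANIFEST
# v1 -- changing this comment forces clients to re-fetch cached entries

CACHE:
styles/main.css
scripts/app.js
images/logo.png

NETWORK:
/stocks/live-graph

FALLBACK:
/ /offline.html
```

Static assets under CACHE: are stored locally; time-sensitive resources under NETWORK: are always fetched from the server; the FALLBACK: entry supplies an offline replacement page for anything not cached.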
REST in Forms: A REST application can be characterized by a clear
separation between clients and servers, stateless communication with
the server (no client context is stored on the server between requests),
and a uniform client-server protocol that can be easily invoked from other
clients. Applied to HTTP, this encourages the use of URIs for identifying
all entities and standard HTTP methods like GET (retrieve), POST (create),
PUT (update) and DELETE (remove) for entity operations. HTML 5 now fully
supports issuing PUT and DELETE requests from HTML forms without any
workarounds. This is an unobtrusive, but ideologically important
innovation which brings more elegance into web architecture and simplifies
development of HTML UI for REST services.
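A form exercising the PUT support the article describes might look like this (the resource URL and field name are hypothetical, and the behavior is per the draft described here):

```html
<!-- Update (replace) the resource at /articles/42 with a PUT request -->
<form action="/articles/42" method="put">
  <input type="text" name="title" value="Revised title">
  <button type="submit">Save</button>
</form>
```

Previously a form could only GET or POST, so REST services had to tunnel PUT and DELETE through POST with workarounds such as hidden method-override fields.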
Communicating Documents: Now documents opened in browsers can exchange
data using messages. Such data exchange may be useful on a web page
that includes several frames with the data loaded from different origins.
Usually, a browser does not allow JavaScript code to access/manipulate
the objects of other documents opened from a different origin. This is
done to prevent cross-site scripting and other malicious and destructive
endeavors..." More Info See also HTML5 differences from HTML4:
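The controlled exception to that same-origin restriction is the cross-document messaging API (`postMessage`). A minimal sketch, with hypothetical origins and element ids:

```html
<!-- Parent page on https://example.com embedding a frame from another origin -->
<iframe id="widget" src="https://widgets.example.org/frame.html"></iframe>
<script>
  var frame = document.getElementById("widget");
  frame.onload = function () {
    // Send a message; the second argument restricts the target origin.
    frame.contentWindow.postMessage("hello", "https://widgets.example.org");
  };
  // Receive replies; always check event.origin before trusting the data.
  window.addEventListener("message", function (event) {
    if (event.origin !== "https://widgets.example.org") return;
    console.log("frame says: " + event.data);
  }, false);
</script>
```

Because the receiver sees the sender's origin on each message and the sender names the intended target origin, documents can cooperate across origins without opening themselves to arbitrary script access.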
W3C First Public Working Draft: The Messaging API
Members of the W3C Device APIs and Policy Working Group have published
a First Public Working Draft for "The Messaging API". The WG was
chartered to create client-side APIs that enable the development of Web
Applications and Web Widgets that interact with device services such
as Calendar, Contacts, Camera... This document "represents the early
consensus of the group on the scope and features of the proposed
Messaging API; in particular, the group intends to work on messages
management (move, delete, copy, etc.) in a separate specification.
Issues and editors' notes in the document highlight some of the points
on which the group is still working and would particularly like to
receive feedback.
The Messaging API specification defines a high-level interface to
Messaging functionality, including SMS, MMS and Email. It includes
APIs to create, send and receive messages. The specification does not
replace RFCs for Mail or SMS URLs, but includes complementary
functionality to these.
Security: The API defined in this specification can be used to create
and subscribe for incoming messages through different technologies.
Sending messages usually has a cost associated with it, especially for
SMSs and MMSs. Furthermore, this cost may depend on the message attributes
(e.g. destination address) or external conditions (e.g. roaming status).
Apart from billing implications, there are also privacy considerations
due to the capability to access message contents. A conforming
implementation of this specification must provide a mechanism that
protects the user's privacy and this mechanism should ensure that no
message is sent or no subscription is established without the user's
express permission.
A user agent must not send messages or subscribe for incoming ones
without the express permission of the user. A user agent must acquire
permission through a user interface, unless they have prearranged
trust relationships with users, as described below. The user interface
must include the URI of the document origin, as defined in HTML 5... A
user agent may have prearranged trust relationships that do not require
such user interfaces. For example, while a Web browser will present a
user interface when a Web site requests an SMS subscription, a Widget
Runtime may have a prearranged, delegated security relationship with
the user and, as such, a suitable alternative security and privacy
mechanism with which to authorize that operation...." More Info