Large-scale digital libraries and book digitization projects are poised
to go beyond prototypes into the mass market. "All the published
literature of humankind in the next generation will be in digital form,"
says Brewster Kahle, cofounder of the Internet Archive and one of the
driving forces behind the nonprofit Open Content Alliance (OCA), an open
digitization consortium. "And all the older materials that will be used
by younger people (except for a very few) will be online. So, if we want
something to be used by the next generation, it has to be online. That's
an understood premise. It's now also understood that it's not that
expensive to get there."

Librarians tackling the new digitization
projects contend with complex technological issues. Notable among them
is creating metadata schemas that work across multiple technologies and
organizations. How best to provide multilingual services is another issue.
However, the issue of who will control the digitization process, and its
concomitant economic and access ramifications, is far more convoluted...
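To make the metadata problem concrete: the schemas in question are typically small, shared element sets such as Dublin Core (discussed below). The following Python sketch, which uses only the standard library and entirely hypothetical record values, shows what a minimal cross-organization record might look like:

# A minimal sketch of a shared metadata record using the Dublin Core
# element set. The title, creator, and language values are hypothetical;
# only Python's standard xml.etree.ElementTree module is used.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def dublin_core_record(fields):
    """Build a simple Dublin Core record from a dict of element -> value."""
    record = ET.Element("record")
    for element, value in fields.items():
        child = ET.SubElement(record, f"{{{DC_NS}}}{element}")
        child.text = value
    return record

# Hypothetical book record; a multilingual library might carry one
# dc:title element per language.
book = dublin_core_record({
    "title": "Don Quijote de la Mancha",
    "creator": "Cervantes Saavedra, Miguel de",
    "language": "es",
    "type": "Text",
})
print(ET.tostring(book, encoding="unicode"))

Because every participating institution emits the same small set of elements, a harvester does not need to understand each local catalog format.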
Interoperability poses several difficulties. Text-heavy books can be
digitized into a handful of common formats, so developing metadata for
them is easier than it is for multimedia materials spread across
multiple institutions. Metadata compatibility will likely present
the greatest challenges and the greatest opportunity for developers in
this market. The European Digital Library (EDL) will most likely opt for
a metadata scheme based on the Dublin Core standard. Presumably, as the
EDL work progresses, mapping technologies will evolve to support semantic
queries. This, in turn, will enable application-level interoperation
without the need for separate, complex, and expensive interoperability
profiles.
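As one illustration of the kind of mapping the EDL work implies, the sketch below crosswalks two hypothetical local catalog schemas onto Dublin Core element names so that a single query can run over records from different institutions. All field names, mapping tables, and sample records are invented for illustration:

# A hedged sketch of a metadata crosswalk: local, institution-specific
# field names are mapped onto shared Dublin Core elements so that one
# query can span collections. The local schemas here are hypothetical.
CROSSWALKS = {
    "library_a": {"hoofdtitel": "title", "auteur": "creator", "taal": "language"},
    "library_b": {"book_title": "title", "author_name": "creator", "lang": "language"},
}

def to_dublin_core(source, record):
    """Map a local record's fields onto Dublin Core element names."""
    mapping = CROSSWALKS[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

records = [
    ("library_a", {"hoofdtitel": "Max Havelaar", "auteur": "Multatuli", "taal": "nl"}),
    ("library_b", {"book_title": "Middlemarch", "author_name": "Eliot, George", "lang": "en"}),
]

# One query, expressed against Dublin Core, now works across both sources.
unified = [to_dublin_core(src, rec) for src, rec in records]
english_titles = [r["title"] for r in unified if r.get("language") == "en"]
print(english_titles)  # ['Middlemarch']

A semantic-query layer of the kind projected for the EDL would, in effect, maintain such mappings centrally, so that individual applications never have to build them.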