What is Multimedia?
Multimedia is media and content that combines different content forms: text, audio, still images, animation, video, and interactive content.
Multimedia
is usually recorded and played, displayed or accessed by information
content processing devices, such as computerized and electronic devices,
but can also be part of a live performance.
Multimedia also
describes electronic media devices used to store and experience
multimedia content. The term "rich media" is synonymous with interactive multimedia. Hypermedia can be considered one particular multimedia application.
History of Multimedia
The term "multimedia" was coined by Bob Goldstein to promote the July 1966 opening of his "LightWorks at L'Oursin" show in Southampton, Long Island. On August 10, 1966, Richard Albarino borrowed the terminology, describing the show as "the latest multi-media music-cum-visuals to debut as discotheque fare." Two years later, in 1968, the term "multimedia" was re-appropriated to describe the work of a political consultant, David Sawyer, the husband of Iris Sawyer, one of the producers at L'Oursin.
In the
intervening forty years, the word has taken on different meanings. In the late
1970s the term was used to describe presentations consisting of multi-projector
slide shows timed to an audio track. However, by the 1990s 'multimedia' took on
its current meaning.
In the
1993 first edition of McGraw-Hill’s Multimedia: Making It Work, Tay Vaughan declared “Multimedia is any
combination of text, graphic art, sound, animation, and video that is delivered
by computer. When you allow the user – the viewer of the project – to control
what and when these elements are delivered, it is interactive multimedia.
When you provide a structure of linked elements through which the user can
navigate, interactive multimedia becomes hypermedia.”
In common usage, the term multimedia refers to an electronically delivered combination of media, including video, still images, audio, and text, presented in such a way that it can be accessed interactively. Much of the content on the web today falls within this definition as understood by millions. Some computers that were
marketed in the 1990s were called "multimedia" computers because they
incorporated a CD-ROM drive, which allowed for the delivery of several hundred
megabytes of video, picture, and audio data.
Categorization of Multimedia
Multimedia may be broadly divided into linear and non-linear categories. Linear content progresses without any navigational control for the viewer, as in a cinema presentation. Non-linear content offers the user interactivity to control progress, as in a computer game or in self-paced computer-based training. Hypermedia is an example of non-linear content, as the sketch below illustrates.
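To make the distinction concrete, here is a minimal Python sketch (all node names are hypothetical) that models linear content as a fixed sequence and non-linear, hypermedia-style content as a web of linked nodes that the user may traverse in any order:

    # Linear content: the viewer simply advances through a fixed sequence.
    linear_presentation = ["intro scene", "main feature", "credits"]
    for scene in linear_presentation:
        print("now showing:", scene)  # no navigational control

    # Non-linear (hypermedia) content: linked nodes, so the user
    # chooses where to go next.
    hypermedia = {
        "home": ["gallery", "training module"],
        "gallery": ["home", "video clip"],
        "training module": ["home", "quiz"],
        "video clip": ["gallery"],
        "quiz": ["training module"],
    }

    def navigate(node):
        # Show a node and the links the user may follow from it.
        print("at '%s', links: %s" % (node, hypermedia[node]))

    navigate("home")     # the user decides: gallery or training module
    navigate("gallery")  # the user may loop back or go deeper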
Multimedia presentations can
be live or recorded. A recorded presentation may allow interactivity via
a navigation system. A live multimedia presentation may allow
interactivity via an interaction with the presenter or performer.
Multimedia
presentations may be viewed in person
on stage, projected, transmitted, or played locally with
a media player. A broadcast may be a live or recorded multimedia presentation. Broadcasts and recordings can use either analog or digital electronic media technology.
Digital online multimedia may be downloaded or streamed.
Streaming multimedia may be live or on-demand.
Multimedia
games and simulations may be used in a physical environment with special
effects, with multiple users in an online network, or locally with an offline
computer, game system, or simulator.
Application of Multimedia
Multimedia
finds its application in various areas including advertisements,
art, education, entertainment, engineering, medicine, mathematics, business, scientific
research and spatial/temporal applications. Several examples are as
follows:
1. Creative industries
Creative
industries use multimedia for a variety of purposes ranging from fine
arts, to entertainment, to commercial art, to journalism, to media and
software services provided for any of the industries listed below. An
individual multimedia designer may cover the spectrum throughout their career.
Much
of the electronic old and new media used by commercial
artists is multimedia. Exciting presentations are used to grab and keep
attention in advertising. Business-to-business and interoffice communications are often developed by creative services firms into advanced multimedia presentations, beyond simple slide shows, to sell ideas or liven up training. Commercial multimedia developers may be hired to design for governmental services and nonprofit applications as well.
2. Entertainment and fine arts
In
addition, multimedia is heavily used in the entertainment industry, especially
to develop special effects in movies and animations. Multimedia games are
a popular pastime and are software programs available either as CD-ROMs or
online. Some video games also use multimedia features. Multimedia
applications that allow users to actively participate instead of just sitting
by as passive recipients of information are called Interactive Multimedia.
In the arts there are multimedia artists, who blend techniques using different media that in some way incorporate interaction with the viewer. Another approach entails the creation of multimedia that can be
displayed in a traditional fine arts arena, such as an art gallery.
Although multimedia display material may be volatile, the survivability of the
content is as strong as any traditional media. Digital recording material may
be just as durable and infinitely reproducible with perfect copies every time.
3. Education
In education, multimedia is used to produce computer-based training courses (popularly called CBTs) and reference books like encyclopedias and almanacs. A CBT lets the user go through a series of presentations, text about a particular topic, and associated illustrations in various information formats. Edutainment is an informal term
used to describe combining education with entertainment, especially multimedia
entertainment.
Learning
theory in the past decade has expanded dramatically because of the introduction
of multimedia. Several lines of research have evolved (e.g. Cognitive
load, Multimedia learning, and the list goes on). The possibilities for
learning and instruction are nearly endless.
The idea of media convergence is also becoming a major factor in education, particularly higher education. Defined as separate technologies, such as voice (and telephony features), data (and productivity applications), and video, that now share resources and interact with each other, synergistically creating new efficiencies, media convergence is rapidly changing the curriculum in universities all over the world. Likewise, it is changing the availability, or lack thereof, of jobs requiring this technological savvy.
Newspaper companies all over the world are also trying to embrace the new phenomenon by implementing its practices in their work. While some have been slow to come
around, other major newspapers like The New York Times, USA Today and The
Washington Post are setting the precedent for the positioning of the newspaper
industry in a globalized world.
4. Engineering
Software engineers may use multimedia in computer simulations for anything from entertainment to training, such as military or industrial training. Multimedia for software interfaces is often created as a collaboration between creative professionals and software engineers.
5. Industry
In the industrial sector, multimedia is used to help present information to shareholders, superiors, and coworkers. Multimedia is also helpful for providing employee training, and for advertising and selling products all over the world via virtually unlimited web-based technology.
6. Mathematical and scientific research
In mathematical and scientific
research, multimedia is mainly used for modeling and simulation. For example,
a scientist can look at a molecular model of a particular
substance and manipulate it to arrive at a new substance.
7. Medicine
In medicine, doctors can be trained by watching a virtual surgery, or they can simulate how the human body is affected by diseases spread by viruses and bacteria and then develop techniques to prevent them.
8. Document imaging
Document imaging is a technique that takes a hard copy of an image or document and converts it into a digital format, for example by scanning.
Information Superhighway (the i-way)
The information superhighway was a popular term used through the 1990s to refer to digital communication systems and the Internet telecommunications network. It was a proposed high-speed communications system, touted by the Clinton/Gore administration, meant to enhance education in America in the 21st century; its purpose was to help all citizens regardless of their income level. The Internet was originally cited as a model for this superhighway; with the explosion of the World Wide Web, however, the Internet itself became the information superhighway.
The information superhighway directly connects
millions of people, each both a consumer of information and a potential
provider. Most predictions about commercial opportunities on the information
superhighway focus on the provision of information products, such as video on
demand, and on new sales outlets for physical products, as with home shopping.
The information superhighway brings together millions of individuals who could
exchange information with one another.
The term multimedia highway also refers to the digital communication system used to carry multimedia content from one computing system to another across a network. It may be implemented using high-speed fiber optics, radio frequencies, wireless links, or common communication subsystems such as broadband connections, ISDN, DSL, and ADSL.
As a part of highway infrastructure management, photographic
logging systems are used widely in highway departments. However, the technology
associated with current photographic logging systems is largely based on analog
video technology. New technologies in digital video, the Internet, and database
management have provided opportunities to overcome the limitations of analog
video and improve the accessibility and functionality of photographic logging
and traditional engineering databases. MMHIS
(multimedia-based highway information system) is an integrated database
system combining traditional highway engineering site data with visual graphic
and roadway videos. The objective of developing the Web-based MMHIS was to establish a web site and a server providing simultaneous retrieval of the multimedia database through the Internet and the highway agency's internal computer network.
What is a Project?
A
project is usually a collaborative undertaking that involves research or design
and is carefully planned to achieve a particular aim.
Project Objectives
Project objectives define the target status at the end of the project; reaching them is considered necessary for achieving the planned benefits. They can be formulated as S.M.A.R.T.: Specific, Measurable, Achievable, Realistic, and Time-bounded. Evaluation occurs at project closure; however, a continuous guard on project progress should be kept through monitoring and evaluation. It is also worth noting that SMART is best applied to incremental-type innovation projects.
The Stages of a Project
Any project, be it a multimedia or a web project, goes through a series of stages. In general, each stage should be completed before the next begins, although some stages may be skipped or combined. In a multimedia project, there are four basic stages:
- Planning and Costing:
Every project begins with an idea, for which we define objectives. Before we begin working on the multimedia project, we need to plan out the writing skills, graphic art, music, video, and other multimedia expertise that we may require. A creative structure with a navigational system should be developed to allow users to view the content and messages of the multimedia project. Time estimation and cost calculation are an integral part of the multimedia project. A prototype is developed to determine whether the project is feasible.
- Designing and Producing:
Each planned task is performed to create a finished product. Feedback cycles are common during this stage and play an important role in satisfying the end user's expectations.
- Testing:
Test your program to make sure that it meets the desired objectives of your project, works properly on the intended delivery platforms, and meets the needs of your end users.
- Delivering:
Package and deliver the final product to the end user.
Basic requirements for creating a
multimedia project
· Hardware
· Software
· Good ideas, talent, and skill
· Time and money (for consumable resources)
· Teamwork:
o Artwork by graphic artists
o Video shoots by video producers
o Sound editing by sound producers
o Programming by programmers
However, the following are the essential requirements for creating a multimedia project.

1. Hardware:
The most significant platforms for producing and delivering multimedia projects are the Apple Macintosh, Microsoft Windows, and Linux operating systems. Among them, the first two are the most widely used platforms throughout the globe and the most common platforms for developing and delivering multimedia content. The Macintosh and the Windows PC offer a combination of affordability, software availability, and worldwide obtainability.
The basic principles for creating and editing multimedia content are the same on all platforms, but compatibility remains a major problem in the bigger picture. Still, a number of format converters exist to address the compatibility problem, and binary-compatible files require no conversion at all. Truly platform-independent delivery of multimedia remains out of reach because technologies keep emerging and newer versions of web browsers are introduced at ever-shorter intervals; these failures in delivering cross-platform content consume a great amount of time.
2. Software:
Multimedia software gives instructions to the available hardware about what to do (such as displaying a certain color, moving a picture from one location to another, playing a sound, or turning down the volume while a song is being played).
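As a concrete illustration, the short Python sketch below performs two of the jobs just described, playing a sound and turning down its volume. It assumes the third-party pygame library is installed and that a file named beep.wav exists; both are assumptions for illustration, not tools named in this text:

    import time
    import pygame

    pygame.mixer.init()                      # prepare the audio hardware
    sound = pygame.mixer.Sound("beep.wav")   # load a sound element (hypothetical file)
    channel = sound.play()                   # instruct the hardware to play it
    time.sleep(0.5)
    sound.set_volume(0.2)                    # turn the volume down mid-play
    while channel is not None and channel.get_busy():
        time.sleep(0.1)                      # wait until playback finishes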
Multimedia software tools can be divided into three types:
a) Tools such as word processors and image/sound editors, which are used to create the individual multimedia elements.
b) Multimedia authoring tools, which enable you to create a final application merely by linking together objects, such as a paragraph of text, an illustration, or a song. By defining the objects' relationships to each other, and by sequencing them in an appropriate order, authors (those who use authoring tools) can produce attractive and useful graphics applications (see the sketch after this list). Most authoring systems also support a scripting language for more sophisticated applications.
c) Tools for developing multimedia on the Internet.
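To suggest how linking and sequencing work, here is an illustrative Python sketch, not any particular authoring tool; all object names are hypothetical. Media objects are defined and then sequenced into a playable order:

    from dataclasses import dataclass

    @dataclass
    class MediaObject:
        name: str
        kind: str        # "text", "image", or "sound"
        duration: float  # seconds to stay on screen

    # The author creates the elements with other tools, then links
    # them together here in an appropriate order.
    sequence = [
        MediaObject("title paragraph", "text", 3.0),
        MediaObject("cover illustration", "image", 5.0),
        MediaObject("theme song", "sound", 10.0),
    ]

    # "Playing" the application is just walking the sequence in order.
    for obj in sequence:
        print("present %s '%s' for %.1f s" % (obj.kind, obj.name, obj.duration))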
3. Creativity:
Before beginning a multimedia project, we must first develop a sense of its scope and content. The most precious asset in creating a multimedia project is our creativity. Creativity cannot be copied, but some people reverse-engineer a well-created multimedia project and then create their own using similar approaches and techniques. Taking inspiration from earlier projects, good multimedia developers can modify and add their own creative touches to design unique multimedia projects.
Creativity is not an inborn talent, and it is very hard to learn. When creating multimedia projects, one should first get to know the hardware and software and all the available tools; once we know the basics of our available resources, creativity comes more easily.
4. Organization:
It is important to develop an organized outline and a plan that clearly describes the skills, time, tools, cost, and resources that a project may need. This should be done before starting to render graphics, sounds, and other components, and a protocol should be established for naming the files so they can be retrieved quickly. These files are called assets, and they should be monitored throughout the project's execution.
Multimedia Team
A multimedia team is a team of experts for developing multimedia: a group of people who design the interface, scan and process images, produce video, and write the script for the final delivery of a multimedia project.
A multimedia production team may require the following discrete roles:
· Executive producer
· Producer/Project Manager
· Creative Director/Multimedia Designer
· Art Director/Visual Designer
· Artist
· Interface designer
· Game Designer
· Subject matter expert
· Instructional designer/training specialist
· Scriptwriter
· Animator (2D/3D)
· Sound producer
· Music composer
· Video producer
· Multimedia programmer
· HTML coder
· Lawyer/media acquisition
· Marketing director
However, the following roles are essential for creating any multimedia project.

1. Project Manager
A project manager's role is at the center of the action: he or she is responsible for the overall development and implementation of a project as well as for its day-to-day operations. The project manager budgets the project, creates schedules, manages creative sessions, regulates time sheets, invoices the project, and coordinates the teamwork.
A project manager has two major areas of responsibility: design and management. Design consists of creating a vision of the product, working out its complete functionality with the design team, and then creating the final product and adjusting it as necessary throughout its development. The management side consists of scheduling and assigning tasks, running meetings, and managing the milestones that must be reached within the stipulated time. A good project manager must completely understand the strengths and limitations of hardware and software so that he or she can make good decisions about what to do and what not to do.
2. Multimedia Designer
A multimedia designer looks at the overall content of a
project, creates a structure for the content, determines the design elements
required to support that structure and decides which media are appropriate for
presenting the pieces of content. In short, a multimedia designer prepares a
blueprint for the entire project: content,
media and interaction.
A multimedia designer is responsible for creating a pleasing, aesthetic look in the multimedia project, with an appealing mix of color, shape, and type. The project should maintain visual consistency, and navigational clues such as links should be clear and simple.
· Instructional designers are specialists in education or training who make sure that the subject matter is clear and properly presented for the intended audience.
· Information designers create and structure content, determine user pathways and feedback, and select presentation media based on an awareness of the strengths of the many separate media that make up multimedia.
A multimedia designer should have the following skills:
· Analysis and presentation skills
o for analyzing structural content and matching it with presentation methods
o for viewing information from different points of view, with the ability to shift views whenever required
· Interpersonal skills
o for interacting with team members, clients, and other experts
· Technological and human skills
3. Interface Designer
An interface provides control to the people who use it and provides access to the media (text, images, graphics, animation, and audio/video) of multimedia. An interface designer enables the user to move through the multimedia project with simplicity and to use its backgrounds, icons, and control panels. The role of an interface designer is to create software that organizes the multimedia content, lets the user access or modify that content, and presents the content on the screen. Any interface essentially has three areas: information design, interactive design, and media design. A good interface designer creates a product that rewards exploration and encourages its use.
4. Writer
Multimedia writers create characters, action, and point of view, and they also create interactivity. They are responsible for writing proposals, voice-over scripts, and actors' narration. They write text screens to deliver messages, and they develop characters designed for an interactive environment. Writers of text screens are also called content writers; they collect information from content experts, synthesize it, and communicate it in a clear and concise manner. Script writers, on the other hand, write dialog, narration, and voice-overs.
5. Video Specialist
A video specialist shoots and edits all the video footage, transfers the video to a computer, and prepares a complete video file for efficient delivery on CD/DVD or the Web. A video specialist still needs an entire team of videographers, sound technicians, lighting designers, set designers, script supervisors, production assistants, and actors.
The workflow of a successful video project starts with good video and sound material. Post-production includes mixing, adding titles, and creating graphics and special effects.
6. Audio Specialist
An audio specialist makes a multimedia project come alive by designing and producing music, voice-over narrations, and sound effects. Audio specialists receive help from composers, audio engineers, and recording technicians. They are also responsible for locating and selecting suitable music, scheduling recording sessions, and digitizing and editing recorded material into computer files.
7. Multimedia Programmer
Also known as a software engineer, a multimedia programmer integrates all the multimedia elements of a project into a final product using an authoring system or a programming language. Multimedia programming tasks range from coding the simple display of multimedia elements to controlling devices such as CD or DVD players and managing complex timing, transitions, and record keeping.
8. The Sum of Parts
Successful multimedia projects begin with the selection of team players, and that selection is the beginning of a team-building process that continues throughout the project. Team building refers to activities that help a group and its members work at the highest levels of performance by creating a working culture. Communication should be encouraged, and decision-making models should be developed that respect individual talent, expertise, and personalities.
Multimedia Hardware
The selection of the proper platform for developing a multimedia project is usually based on personal preference of computer, the project's delivery requirements, and the type of material and content in the project. The Macintosh platform is widely believed to be smoother and easier than the Windows platform for developing multimedia. However, hardware and authoring tools for both platforms have improved greatly, and hardware and software can be easily acquired and installed on either.
The Macintosh Platform
All Macintoshes can record and play sound, and they include hardware and software for digitizing and editing video and producing CD/DVD discs. Unlike the Windows environment, where users can operate any application from the keyboard alone, the Macintosh requires a mouse.
In 1994, Apple introduced the first Power Macintosh computer, based on a reduced instruction set computing (RISC) microprocessor of the kind typically used in engineering workstations and commercial database servers. Apple designed and built this new line of RISC-based models with the help of IBM and Motorola. In 1997, the G3 series was introduced with clock speeds above 233 MHz, offering higher performance than the Pentium machines of the day. In 2003, the G4 computers offered gigahertz speeds and a dual-processor mode said to make performance up to 20 times higher than the G3 when running applications like Photoshop. In 2006, Apple adopted Intel's processor architecture, which also allowed the Macintosh to run x86 operating systems.
The Windows Platform
A Windows computer is a collection of parts tied together by the requirements of the Windows operating system: power supply, processor, hard disks, CD-ROM drive, video and audio components, monitor, keyboard, and mouse. One can easily collect the hardware parts and assemble one's own computer to run Windows, saving considerable cost.
Today, a Windows Multimedia PC (MPC) is equipped with network support, audio, a CD-ROM drive, plenty of RAM, adequate processor speed, and a high-resolution monitor.
Hardware Peripherals
In any computing facility, hardware peripherals are required to accept the user's input, process it, give the desired output, and store data when necessary. This suggests categorizing hardware peripherals into:
· Connection hardware
· Memory and storage devices
· Input devices
· Output devices
Connection Hardware
Connection hardware transfers data and instructions from the input devices for processing and subsequently delivers output to the output devices. The choice of connection hardware depends upon the type of data being transferred; high-quality video, for example, may take a long time to transfer over connection hardware with a low data transfer rate. Different connection hardware and their data transfer rates are shown below.
Connection | Transfer Rate
Serial port | 115 Kbit/s
Parallel port | 115 Kbit/s
Original USB | 12 Mbit/s
IDE | 3.3-16.7 MB/s
SCSI-1 | 5 MB/s
SCSI-2 (Fast SCSI) | 10 MB/s
Fast Wide SCSI | 20 MB/s
Ultra SCSI (SCSI-3) | 20 MB/s
Ultra IDE | 33 MB/s
Wide Ultra SCSI | 40 MB/s
Hi-Speed USB | 480 Mbit/s
IEEE 1394 (FireWire) | 100-1600 Mbit/s
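As a rough worked example of what these rates mean in practice (note that the rows above mix megabits and megabytes per second), the Python sketch below estimates how long a 100 MB video file would take over two USB generations and over FireWire at 400 Mbit/s; the choice of 400 Mbit/s is an assumption within the 100-1600 range above, and these are theoretical peaks, so real transfers are slower:

    # Rough transfer-time estimates from peak rates in the table.
    # Rates in megabits per second; 1 byte = 8 bits.
    rates_mbit_s = {
        "Original USB": 12,
        "Hi-Speed USB": 480,
        "IEEE 1394 (FireWire, assumed 400)": 400,
    }

    file_size_mb = 100                 # megabytes
    file_size_mbit = file_size_mb * 8  # megabits

    for name, rate in rates_mbit_s.items():
        seconds = file_size_mbit / rate  # idealized, no protocol overhead
        print("%s: about %.1f s for a %d MB file" % (name, seconds, file_size_mb))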
SCSI (Small Computer System Interface)
Pronounced "skuzzy", SCSI connects peripheral devices, such as disk drives, scanners, and CD-ROM drives, that conform to the SCSI standard. SCSI is most commonly used for hard disks and tape drives, but it can connect a wide range of other devices, including scanners and CD drives.
SCSI cards can be installed in PCs to attach external peripheral devices such as hard disks, CD-ROM drives, tape drives, printers, and scanners. A separate drive letter is mounted whenever a SCSI device is connected: floppy drives are assigned the drive letters A and B, the hard disk is mounted as drive C, the CD-ROM as drive D, and SCSI devices as E, F, G, and so on. A typical SCSI bus can connect up to 8 devices (ID numbers 0 to 7), while Ultra SCSI can connect up to 32 devices to a computer. Serial SCSI allows for hot swapping, improved fault tolerance, and faster data rates.
(Hot
swapping and hot plugging are terms used to describe the
functions of replacing system components without shutting down the system)
While using SCSI we may need to adjust cable lengths and
reconfigure terminating resistors. The IDs assigned to peripherals should not
be either 0 or 7 and the same ID should not be assigned to two different
devices.
SCSI-1 transfers data at the rate of 5 MB/s and supports up to 7 devices. SCSI-2 is divided into Fast SCSI (10 MB/s) and Wide SCSI (with the bus width increased to 16 bits). A composite Fast/Wide SCSI can reach transfer rates of up to 20 MB/s. SCSI-3, the family that includes Ultra SCSI, eventually reached rates of up to 320 MB/s.
The major advantage of SCSI is that it doesn’t demand CPU
time and it is often preferred for real time video editing, network services
and in situations in which writing simultaneously to two or more disks
(mirroring) is required.
Memory and Storage Devices
When creating a multimedia project, one must take note of the memory and storage devices that are available. Color images, text, sound, video clips, and programming code all require memory, and more elements mean more memory. We may also need to allocate memory for storing and archiving the working files used during production: audio and video clips, edited pieces, paperwork, and backups of the project files.
Random Access Memory
Random access memory (RAM) is volatile and is essential for program execution and temporary data storage. In multimedia applications, RAM plays a vital role: even with a high processing speed, insufficient RAM may keep a multimedia application from delivering the desired results in time, because a fast processor without enough RAM wastes processor cycles while it swaps needed portions of program code into and out of memory. Increasing RAM often yields a greater performance improvement than upgrading the processor.
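For a feel of the numbers involved, this quick Python sketch estimates the RAM consumed by uncompressed 24-bit images at two illustrative screen sizes:

    # How quickly uncompressed images consume RAM (illustrative sizes).
    BYTES_PER_PIXEL = 3  # 24-bit color: one byte each for red, green, blue

    for width, height in [(640, 480), (1024, 768)]:
        size_bytes = width * height * BYTES_PER_PIXEL
        print("%dx%d image: %.1f MB of RAM" % (width, height, size_bytes / 2**20))

A single 1024x768 true-color image already occupies about 2.3 MB, which is why multimedia work demands generous RAM.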
Read Only Memory
Read-only memory (ROM) is non-volatile: it doesn't lose its contents. ROM is used to hold the small BIOS program that boots up the computer, and it is typically used in printers to hold built-in fonts. Programmable ROMs (EPROMs) allow changes to be made that are not forgotten. Optical ROM (OROM) is provided in data cards, and its content is written only once.
Floppy and Hard Disks
Adequate storage for your production environment can be provided by large-capacity hard disks; a server-mounted disk on a network; Zip, Jaz, or Syquest removable cartridges; optical media; CD-R (compact disc-recordable) discs; tape; floppy disks; banks of special memory devices; or any combination of the above.
Floppy disks and hard disks are mass-storage devices for binary data, that is, data that can be easily read by a computer. Hard disks can contain much more information than floppy disks and can operate at far higher data transfer rates. In the scale of things, however, floppies are no longer "mass-storage" devices and are disappearing from the storage methods of choice.
A floppy disk is made of flexible Mylar plastic coated with a thin layer of special magnetic material. A hard disk is a stack of hard metal platters coated with magnetically sensitive material, with a series of recording heads or sensors that hover a hairbreadth above the fast-spinning surface, magnetizing or demagnetizing spots along formatted tracks using technology similar to that used in floppy disks and audio and video tape recording. Hard disks are the most common mass-storage devices used on computers, and for making multimedia we need one or more large-capacity hard disk drives.
As multimedia has reached consumer desktops, makers of hard disks have been challenged to build smaller-profile, larger-capacity, faster, and less-expensive hard disks.
Zip, Jaz, Syquest, and Optical Storage Devices
For years, the Syquest 44 MB removable cartridges were the most widely used portable medium among multimedia developers and professionals, but Iomega's inexpensive Zip drives, with their likewise inexpensive 100 MB, 250 MB, and 750 MB cartridges built on floppy disk technology, significantly penetrated Syquest's market share for removable media. Iomega's Jaz cartridges, built on hard disk drive technology, provided one or two gigabytes of removable storage and transfer rates fast enough for multimedia development.
Magneto-optical
(MO) drives use a high-power laser to hit tiny
spots on the metal oxide coating of the disk. While the spot is hot, a magnet
aligns the oxides to provide a 0 or 1 (on or off) orientation. Like Syquest’s
and other hard disks, this is rewritable technology, because spots can be
repeatedly heated and aligned. Moreover, this media is normally not affected by
stray magnetism (it needs both heat and magnetism to make changes), so these disks
are particularly suitable for archiving data. The data transfer rate is,
however, slow compared to Zip, Jaz, and Syquest technologies. One of the most
popular formats uses a 128MB-capacity disk—about the size of a 3.5-inch floppy.
Larger-format magneto-optical drives with 5.25-inch cartridges offering 650MB
to 1.3GB of storage are also available.
The Jaz drive was a removable disk storage system introduced by the Iomega Company in 1995; the system has since been discontinued. The Jaz disks were originally released with a 1 GB capacity (a 540 MB version existed but was unreleased) in a 3½-inch form factor, a significant increase over Iomega's most popular product at the time, the Zip drive with its 100 MB capacity. The Jaz drive used only the SCSI interface (the internal IDE version is rare), but an adapter known as the Jaz Traveller was available to connect it to a standard parallel port.
The capacity was later increased to 2 GB through a drive and disk revision in 1998, before the Jaz line was ultimately discontinued in 2002. Unlike the Zip, which is floppy disk technology, the Jaz is hard disk drive technology, utilizing rigid platters. The Jaz drive is a type of removable rigid disk (RRD) drive, with two platters (four writable surfaces) contained within a thick cartridge with a sliding door. Like the Zip disk, the Jaz disk has a reflective piece in the bottom corner so the drive can detect its capacity before loading. The drive itself contains the spindle motor, read/write heads, voice-coil actuator, and drive controller. The Jaz disk is spun at approximately 5000 RPM. The Jaz drive showed much promise, with a head-parking system very similar to that of high-end laptop hard drives, and, unlike most of SyQuest's devices, the drive automatically brakes the disk using reverse torque before auto-ejecting. The Jaz drive is much more fragile than the Zip drive and, unlike the Zip, can only be used lying horizontally on a flat surface.
SyQuest Technology, Inc., now known as SYQT,
Inc., was an early entrant into the removable hard disk market for personal computers. The company was
started in 1982 by Syed Iftikar;
it was named partially after himself because of a company meeting wherein it
was decided that "SyQuest" ought to be a shortened name for
"Sy's Quest". Its earliest products were 3.9" (100mm) removable hard drives, and 3.9" (100mm)
ruggedized hard drives for IBM
compatibles and military applications.
For many years SyQuest held the market, particularly as a method of transferring large desktop publishing documents to printers. SyQuest aimed its products at giving personal computer users "endless" hard drive space for data-intensive applications like desktop publishing, Internet information management, pre-press, multimedia, audio, video, digital photography, fast backup, data exchange, archiving, confidential data security, and easy portability for the road.
SyQuest 44 MB removable disk cartridge
Digital Versatile Disc (DVD)
In December 1995, nine major electronics companies (Toshiba, Matsushita, Sony, Philips, Time Warner, Pioneer, JVC, Hitachi, and Mitsubishi Electric) agreed to promote a new optical disc technology for distribution of multimedia and feature-length movies called the Digital Versatile Disc (DVD).
With this new medium capable not only of gigabyte storage capacity but also of full-motion video (MPEG-2) and high-quality surround-sound audio, the bar has again risen for multimedia developers. Commercial multimedia projects will become more expensive to produce as consumers' performance expectations rise. There are three types of DVD, DVD-Read/Write, DVD-Video, and DVD-ROM, and these types reflect marketing channels, not the technology.
There are three formats for manufacturing and writing DVDs:
DVD-R/RW, DVD+R/RW (with a plus sign), and DVD-RAM (random access memory). Dual
Layer recording allows DVD-R and DVD+R discs to store more data, up to 8.5
Gigabytes per disc, compared with 4.7 Gigabytes for single-layer discs. DVD-R
DL (dual layer) was developed for the DVD Forum by the Pioneer Corporation.
DVD+R DL (double layer) was developed for the DVD+RW Alliance by Sony.
CD-ROM Players
Compact disc
read-only memory (CD-ROM) players have
become an integral part of the multimedia development workstation and are an
important delivery vehicle for large, mass-produced projects. A wide variety of
developer utilities, graphics backgrounds, stock photography and sounds,
applications, games, reference texts, and educational software are available
only on the medium.
CD-ROM players have typically been very slow to access and transmit data (150 KBps, which is the speed required of consumer Red Book Audio CDs), but developments have led to double-, triple-, and quadruple-speed drives, and on to 24x, 48x, and 56x drives designed specifically for computer (not Red Book Audio) use. These faster drives spool up like washing machines on the spin cycle and can be somewhat noisy, especially if the inserted compact disc is not evenly balanced.
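The "x" ratings are simple multiples of the original 150 KBps Red Book rate, as this small Python sketch shows:

    # CD-ROM "x" ratings are multiples of the 150 KBps Red Book base rate.
    BASE_KBPS = 150  # kilobytes per second at single (1x) speed

    for multiplier in [1, 2, 24, 48, 56]:
        rate = BASE_KBPS * multiplier
        print("%dx drive: %d KBps (about %.1f MBps)" % (multiplier, rate, rate / 1024))

A 48x drive thus peaks at 7200 KBps, roughly 7 MBps, although sustained rates are lower in practice.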
CD Recorders
With a compact disc recorder, you can make your own CDs, using special CD-recordable (CD-R) blank discs in most formats of CD-ROM and CD-Audio. Software, such as Roxio's Toast and Easy CD Creator, lets you organize files on your hard disk(s) into a "virtual" structure, and then writes them to the CD in that order. CD-R discs are made differently than normal CDs but can play in any CD-Audio or CD-ROM player.
These write-once, enhanced CDs make excellent high-capacity file archives and
are used extensively by multimedia developers for pre-mastering and testing
CD-ROM projects and short-run distribution of finished multimedia projects.
CD-RW
A CD-RW recorder can rewrite 700 MB of data to a CD-RW disc about 1000 times. Except for their capability of totally erasing a disc, CD-RWs act much like CD-Rs and are subject to the same restrictions: writing sessions must be closed before they can be read in a CD-ROM drive or player, and though sessions can be extended, in most cases they cannot be overwritten. To reuse a CD-RW, you must first blank it.
Input Devices
A great variety of input devices, from the familiar keyboard and handy mouse to touchscreens and voice recognition setups, can be used for the development and delivery of multimedia projects. If you are designing your project for a public kiosk, use a touchscreen. If your project is for a lecturing professor who likes to wander about the classroom, use a remote handheld mouse. If you create a great deal of original computer-rendered art, consider a pressure-sensitive stylus and drawing tablet.
Keyboards
A keyboard is the most common method of interaction with a
computer. Keyboards provide various tactile responses and have various layouts
depending upon your computer system and keyboard model. Keyboards are typically
rated for at least 50 million cycles (the number of times a key can be pressed
before it might suffer breakdown).
The most common keyboard for PCs is the 101-style (which provides 101 keys), although many styles are available with more or fewer special keys, LEDs, and other features, such as a plastic membrane cover for industrial or food-service applications or flexible "ergonomic" styles. Macintosh keyboards connect to the USB port, which manages all forms of user input, from digitizing tablets to mice. Wireless keyboards use low-powered radio or light (infrared) waves to transmit data between devices.
Mice
A mouse is the standard tool for interacting with a graphical user interface (GUI). All Macintosh computers require a mouse; on PCs, mice are not required but are recommended. Even though the Windows environment accepts keyboard entry in lieu of mouse point-and-click actions, your multimedia project should
typically be designed with the mouse or touchscreen in mind. The buttons on the
mouse provide additional user input, such as pointing and double-clicking to
open a document, or the click-and-drag operation, in which the mouse button is
pressed and held down to drag (move) an object, or to move to and select an
item on a pull-down menu, or to access context-sensitive help. The standard
Apple mouse has one button; PC mice may have as many as three.
Trackballs
Trackballs are similar to mice, except that the cursor is moved by using one or more fingers to roll the top of the ball. A trackball does not need the flat space required by a mouse, which is important in small, confined environments and for portable laptop computers. Trackballs have at least two buttons: one for the user to click or double-click, and the other to provide the press-and-hold condition necessary for selecting from menus and dragging objects.
Touchscreens
Touchscreens are monitors that usually have a textured coating across
the glass face. This coating is sensitive to pressure and registers the
location of the user’s finger when it touches the screen. The TouchMate
system, which has no coating, actually measures the pitch, roll, and yaw
rotation of the monitor when pressed by a finger, and determines how much force
was exerted and the location where the force was applied. Other touchscreens use invisible beams of infrared light that crisscross the front of the monitor to calculate where a finger was pressed. Pressing twice on the screen in quick succession simulates the double-click action of a mouse. Touching the screen and dragging the finger, without lifting it, to another location simulates a mouse click-and-drag. A keyboard is sometimes simulated using an on-screen representation so users can input names, numbers, and other text by pressing "keys".
Touchscreens are not recommended for day-to-day computer
work, but are excellent for multimedia applications in a kiosk, at a trade
show, or in a museum delivery system—anything involving public input and simple
tasks. When your project is designed to use a touchscreen, the monitor is the
only input device required, so you can secure all other system hardware behind
locked doors to prevent theft or tampering.
Magnetic Card
Encoders and Readers
Magnetic card setups are useful when you need an interface
for a database application or multimedia project that tracks users. You need
both a card encoder and a card reader for this type of interface. The magnetic card encoder connects to the
computer at a serial port and transfers information to the magnetic strip of tape on the back of the card. The magnetic card reader then reads the information encoded on the card. A visitor to a museum, for example, could slide an encoded card through a reader at any exhibit station and be rewarded with a personalized or customized response from an intelligent database or presentation system. French-speaking visitors to a Norwegian museum, for instance, could hear an exhibit described in French. This is also a common method for reading credit card information and connecting it to your bank account.
Graphics Tablets
Flat-surface input devices are attached to the computer in
the same way as a mouse or a trackball. A special pen is used against the
pressure-sensitive surface of the tablet to move the cursor. Graphics tablets provide substantial
control for editing finely detailed graphic elements, a feature very useful to
graphic artists and interface designers. Tablets can also be used as the input
devices for end users: you can design a printed graphic, place it on the
surface of the tablet, and let users work with a pen directly on the input
surface. On a floor plan, for instance, visitors might draw a track through the
hallways and rooms they wish to see and then receive a printed list of things
to note along the route. Some tablets are pressure sensitive and are good for
drawing: the harder you press the stylus, for example, the wider or darker the
line you draw. Graphic artists who try these usually fall prey to Vaughan's One-Way Rule and never return to drawing with the mouse.
Scanners
A scanner may be the most useful piece of equipment you will
use in the course of producing a multimedia project. There are flat-bed,
handheld, and drum scanners, though the most commonly available are color, flat-bed
scanners that provide a resolution of 600 dots
per inch (dpi) or better. Professional graphics houses may use even higher
resolution drum scanners. Handheld scanners can be useful for scanning small images and columns of text, but they may prove inadequate for your multimedia development.
Be aware that scanned images, particularly those at high resolution and in color, demand an extremely large amount of storage space on your hard disk, no matter what instrument is used to do the scanning. Also remember that the final monitor display resolution for your multimedia project will probably be just 72 or 96 dpi; leave the very expensive ultra-high-resolution scanners to the desktop publishers. Most inexpensive flat-bed scanners offer at least 600 dpi resolution, and most allow you to set the scanning resolution.
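A quick calculation shows why storage demands grow so fast: the size of an uncompressed 24-bit scan grows with the square of the resolution, as in this Python sketch for a letter-size page:

    # Uncompressed size of a 24-bit color scan of an 8.5 x 11 inch page.
    width_in, height_in = 8.5, 11
    BYTES_PER_PIXEL = 3  # 24-bit color

    for dpi in [72, 300, 600]:
        pixels = (width_in * dpi) * (height_in * dpi)
        mb = pixels * BYTES_PER_PIXEL / 2**20
        print("%d dpi scan: about %.0f MB uncompressed" % (dpi, mb))

At 600 dpi the raw scan approaches 100 MB, which is why scans are usually compressed and downsampled for delivery.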
Scans let you make clear electronic images of existing
artwork such as photos, ads, pen drawings, and cartoons, and can save many
hours when you are incorporating proprietary art into your application. Scans
also can give you a starting point for your own creative diversions.
Optical Character Recognition (OCR) Devices
Scanners enable you to use optical character recognition (OCR) software, such as OmniPage from ScanSoft (a division of Nuance Communications) or Recore from Maxsoft-Ocron, which also offers an ActiveX programming interface that allows programmers to create custom applications, including web-based OCR. With OCR software and a scanner, you can convert paper documents into word processing documents on your computer without retyping or rekeying.
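The packages named above are commercial products; as a free illustration of the same idea, the sketch below uses the open-source Tesseract OCR engine via the pytesseract Python wrapper (an assumption for illustration; neither tool is mentioned in this text, and the file name is hypothetical):

    # Illustrative OCR: requires the Tesseract binary plus
    # 'pip install pytesseract pillow'.
    from PIL import Image
    import pytesseract

    page = Image.open("scanned_page.png")     # the scanned document (hypothetical)
    text = pytesseract.image_to_string(page)  # recognize the characters

    # Save as plain text, ready for a word processor, with no retyping.
    with open("recognized.txt", "w", encoding="utf-8") as out:
        out.write(text)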
Barcode readers are probably the most familiar optical character recognition devices in use today, mostly at markets, shops, and other point-of-purchase locations. Using photo cells and laser beams, barcode readers recognize the numeric characters of the Universal Product Code (UPC) that are printed in a pattern of parallel black bars on merchandise labels. With OCR, or barcoding, retailers can efficiently process goods in and out of their stores and maintain better inventory control.
An OCR terminal can be of use to a multimedia developer because it recognizes not only printed characters but also handwriting. This facility may be beneficial at a kiosk or in a general education environment where user friendliness is a goal, because there is growing demand for a more personal and less technical interface to data and information.
Infrared Remotes
An infrared remote unit lets a user interact with your
project while he or she is freely moving about. Remotes work like mice and
trackballs, except they use infrared light to direct the cursor and require no
cables to communicate. Remote mice work well for a lecture or other
presentation in an auditorium or similar environment, when the speaker needs to
move around the room.
Voice Recognition Systems
For hands-free interaction with your projects, try voice recognition systems. These behavioral biometric systems usually provide a unidirectional cardioid, noise-canceling microphone that automatically filters out background noise, and they learn to recognize voiceprints. Most voice recognition systems currently available can trigger common menu events such as Save, Open, Quit, and Print, and you can teach the system to recognize other commands that are more specific to your application. Systems available for the Macintosh and Windows environments typically must be taught to recognize individual voices and then be programmed with the appropriate responses to the recognized word or phrase. Dragon's NaturallySpeaking takes dictation, translates text to speech, and does command-to-click, a serious aid for people unable to use their hands.
Digital Cameras
Digital cameras use the same CCD technology as video
cameras. They capture still images of a given number of pixels (resolution),
and the images are stored in the camera’s memory to be uploaded later to a
computer. The resolution of a digital camera is determined by the number of
pixels on the CCD chip, and the higher the Megapixel rating, the higher the
resolution of the camera. Images are uploaded from the camera’s memory using a
serial, parallel, or USB cable, or, alternatively, the camera’s memory card is
inserted into a PCMCIA reader connected to the computer. Digital cameras range from units small enough to fit in a cell phone to more elaborate cameras used in television studios or as spy cameras on orbiting spacecraft.
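The relationship between pixel dimensions, the megapixel rating, and raw file size is simple multiplication, as this Python sketch shows for two illustrative sensors:

    # Megapixel rating is width x height in millions of pixels;
    # raw 24-bit size is three bytes per pixel.
    for width, height in [(1600, 1200), (3000, 2000)]:
        megapixels = width * height / 1_000_000
        raw_mb = width * height * 3 / 2**20
        print("%dx%d: %.1f MP, about %.1f MB uncompressed"
              % (width, height, megapixels, raw_mb))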
Output Hardware
Presentation of the audio and visual components of your multimedia project requires hardware that may or may not be included with the computer itself: speakers, amplifiers, monitors, motion video devices, sound recorders, and capable storage systems. It goes without saying that the better the equipment, the better the presentation. There is no better demonstration of the benefits of good output hardware than feeding the audio output of your computer into an external amplifier system: suddenly the bass sounds become deeper and richer, and even music sampled at low quality may sound acceptable.
Audio Devices
All Macintoshes are equipped with an internal speaker and a dedicated sound chip, and they are capable of audio output without additional hardware or software. To take advantage of built-in stereo sound, external speakers are required.
Digitizing sound on your Macintosh requires an external
microphone and sound editing/recording software.
Monitors
The monitor you need for development of multimedia projects depends on the type of multimedia application you are creating, as well as on what computer you are using. A wide variety of monitors is available for both Macintoshes and PCs. High-end, large-screen graphics monitors and LCD panels are available for both, and they are expensive.
Serious multimedia developers will often attach more than one monitor to their computers, using add-on graphics boards. This is because many authoring systems allow you to work with several open windows at a time, so you can dedicate one monitor to viewing the work you are creating or designing and perform various editing tasks in windows on other monitors that do not block the view of your work. Developing in Director is best with at least two monitors, one to view your work and the other to view the Score; a third monitor is often added by Director developers to display the Cast. For years, one of the advantages of the Macintosh for making multimedia was that multiple monitors could easily be attached for development work. Since Windows 98, PCs can also be configured for more than one monitor.
Video Devices
No other contemporary message medium has the visual impact
of video. With a video digitizing board installed in your computer, you can
display a television picture on your monitor. Some boards include a
frame-grabber feature for capturing the image and turning it into a color
bitmap, which can be saved as a PICT or TIFF file and then used as part of a graphic or a background in your project.
Display of video on any computer platform requires
manipulation of an enormous amount of data. When used in conjunction with
videodisc players, which give you precise control over the images being viewed,
video cards let you place an image into a window on the computer monitor; a
second television screen dedicated to video is not required. And video cards
typically come with excellent special effects software.
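How enormous? The Python sketch below estimates the raw data rate of uncompressed 24-bit video at two illustrative frame sizes, which is why frame grabbing and compression hardware matter:

    # Raw data rate of uncompressed 24-bit video (illustrative formats).
    BYTES_PER_PIXEL = 3

    for width, height, fps in [(320, 240, 15), (640, 480, 30)]:
        rate = width * height * BYTES_PER_PIXEL * fps  # bytes per second
        print("%dx%d at %d fps: %.1f MB/s" % (width, height, fps, rate / 2**20))

Full-screen, full-motion 640x480 video at 30 frames per second amounts to roughly 26 MB every second before compression.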
There are many video cards available today. Most of these
support various video-in-a-window sizes, identification of source video, setup
of play sequences or segments, special effects, frame grabbing, digital
moviemaking; and some have built-in television tuners so you can watch your
favorite programs in a window while working on other things. Good video greatly
enhances your project; poor video will ruin it.
Projectors
When you need to show your material to more viewers than can
huddle around a computer monitor, you will need to project it onto a large
screen or white-painted wall. Cathode-ray tube (CRT) projectors, liquid crystal display (LCD) panels, Digital Light Processing (DLP) projectors, liquid crystal on silicon (LCOS) projectors, and (for larger projects) Grating-Light-Valve (GLV) technologies are available. CRT projectors have been around for quite a while; they are the original "big-screen" televisions. They use three separate projection tubes and lenses (red, green, and blue), and the three color channels of light must "converge" accurately on the screen. Setup, focusing, and alignment are important for getting a clear and crisp picture. CRT projectors are compatible with the output of most computers as well as televisions.
LCD panels are portable devices that fit in a briefcase. The panel is
placed on the glass surface of a standard overhead projector available in most
schools, conference rooms, and meeting halls. While the overhead projector does
the projection work, the panel is connected to the computer and provides the
image, in thousands of colors and, with active-matrix technology, at speeds
that allow full-motion video and animation. Because LCD panels are small, they
are popular for on-the-road presentations, often connected to a laptop computer
and using a locally available overhead projector.
More complete LCD projection panels contain a projection
lamp and lenses and do not require a separate overhead projector. They
typically produce an image brighter and sharper than the simple panel model,
but they are somewhat larger and cannot travel in a briefcase.
DLP projectors (it's done with mirrors) use a semiconductor chip arrayed with microscopic mirrors laid out in a matrix known as a Digital Micromirror Device (DMD), where each mirror represents one pixel in the projected image. Rapid repositioning of the mirrors reflects light either out through the lens or onto a heat sink (a light dump).
Liquid crystal on silicon (LCOS or LCos) is a micro-display
technology mostly used for projection televisions. It is a reflective technology
similar to DLP projectors but it uses liquid crystals instead of individual
mirrors.
Grating-Light-Valves (GLVs) compete with high-end CRT projectors and diffract laser light (red, green, and blue) using an array of tiny movable ribbons mounted on a silicon base. The GLV uses six ribbons as the diffraction grating for each pixel. The alignment of the gratings is altered by electronic signals, and this displacement controls the intensity of the diffracted light. These units are expensive, but the image from a light-valve projector is very bright and color-saturated and can be projected onto screens as wide as ten meters, at any aspect ratio.
Communication Devices
Many multimedia applications are developed in workgroups comprising instructional designers, writers, graphic artists, programmers, and musicians located in the same office space or building. The workgroup members' computers are typically connected on a local area network (LAN). The client's computers, however, may be thousands of miles distant, requiring other methods for good communication.
Communication among workgroup members and with the client is essential for the efficient and accurate completion of a project. Normal U.S. Postal Service mail delivery is too slow to keep pace with most projects; overnight express services are better. And when you need it immediately, an Internet connection is required. If your client and you are both connected to the Internet, a combination of communication by e-mail and by FTP (File Transfer Protocol) may be the most cost-effective and efficient solution for both creative development and project management.
In the workplace, use quality equipment and software for your communications setup. The cost, in both time and money, of stable and fast networking will be returned to you.
Modems
Modems can be connected to your computer externally at the serial port or internally as a separate board. Internal modems often include fax capability. Be sure you have a Hayes-compatible modem. The Hayes AT standard command set (named for the ATTENTION command that precedes all other commands) allows you to work with most software communications packages.
Modem speed, measured in baud, is the most important consideration. Because the multimedia files that contain graphics, audio resources, video samples, and progressive versions of your project are usually large, you need to move as much data as possible in as short a time as possible. Today's standards dictate at least a V.90 56 Kbps modem. Compression saves significant transmission time and money, especially over long distances. Today, tens of millions of people use a V.90 modem to connect to the Internet.
Modems modulate and demodulate analog signals. According to the laws of physics, copper telephone lines and the switching equipment at the phone companies' central offices can handle modulated analog signals up to about 28,000 bps on "clean" lines, so a 56 Kbps V.90 modem depends on hardware-based compression algorithms to crunch the data before sending it and decompress it upon arrival at the receiving end. If you have already compressed your data into a .sit, .sea, .arc, or .zip file, you may not reap added benefit from this hardware compression, because it is difficult to compress an already-compressed file.
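To see what compression can buy, this Python sketch compares download times for a 5 MB file at the 56 Kbps modem rate and at the 128 Kbps ISDN rate discussed next, with and without an assumed 2:1 compression ratio (real ratios depend entirely on the data, and approach 1:1 for already-compressed files):

    # Download times at dial-up rates, with and without an assumed
    # 2:1 hardware compression ratio.
    file_bits = 5 * 2**20 * 8  # a 5 MB file, in bits

    for name, bps in [("V.90 modem", 56_000), ("ISDN", 128_000)]:
        plain = file_bits / bps  # seconds, uncompressed
        compressed = plain / 2   # assumed 2:1 ratio
        print("%s: %.1f min raw, %.1f min with 2:1 compression"
              % (name, plain / 60, compressed / 60))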
ISDN and DSL
For higher transmission speeds by telephone, you will need
to use Integrated Services Digital
Network (ISDN), Switched-56, T1, T3, DSL, ATM, or another of the telephone
companies’ Digital Switched Network services.
ISDN lines offer a 128 Kbps data transfer rate—twice as fast
as a 56 Kbps analog modem. ISDN lines (and the required ISDN hardware) are used
for Internet access, networking, and audio and video conferencing. These
dedicated telephone lines are more expensive than conventional analog or POTS (plain old telephone service)
lines, so analyze your costs and benefits carefully before upgrading.
Newer and faster Digital
Subscriber Line (DSL) technology using a dedicated copper line has
overtaken ISDN in popularity. DSL uses signal frequencies higher than those
used by voice or fax. When a DSL filter is connected to your phone jack, it
splits the data (Internet) traffic from voice (phone) traffic, so voice traffic
(talking on the phone and fax signals) goes to the phone or the fax machine
while data traffic (surfing the web, downloading large files or photos) goes to
your computer. You can do both at the same time.
Basic
Software Tools
The basic tool set for building multimedia projects contains
one or more authoring systems and various editing applications for text, images,
sounds, and motion video. A few additional applications are also useful for
capturing images from the screen, translating file formats, and moving files
among computers when you are part of a team. These are tools for the housekeeping
tasks that make your creative and production life easier.
Text Editing and Word Processing Tools
A word processor is usually the first software tool
computer users learn. From letters, invoices, and storyboards to project
content, your word processor may also be your most often used tool, as you
design and build a multimedia project. The better your keyboarding or
typing skills, the easier and more efficient your multimedia day-to-day life
will be.
Typically, an office or workgroup will choose a single word
processor to share documents in a standard format. And most often, that word
processor comes bundled in an office suite that might include
spreadsheet, database, e-mail, web browser, and presentation applications.
Word processors such as Microsoft Word and WordPerfect are
powerful applications that include spell checkers, table formatters,
thesauruses, and prebuilt templates for letters, resumes, purchase orders, and
other common documents. In many word processors, you can actually embed
multimedia elements such as sounds, images, and video. Luckily, the population
of single-finger typists is decreasing over time as children are taught
keyboarding skills in conjunction with computer lab programs in their schools.
Painting and Drawing Tools
Painting and drawing tools, as well as 3-D modelers, are
perhaps the most important items in your toolkit because, of all the multimedia
elements, the graphical impact of your project will likely have the greatest
influence on the end user. If your artwork is amateurish, or flat and
uninteresting, both you and your users will be disappointed.
Look for the following features in a drawing or painting package:
§ An intuitive graphical user interface with pull-down menus,
status bars, palette control, and dialog boxes for quick, logical selection
§ Scalable dimensions, so you can resize, stretch, and distort
both large and small bitmaps
§ Paint tools to create geometric shapes, from squares to
circles and from curves to complex polygons
§ Ability to pour colors, patterns, and clip art into areas of an image
§ Customizable pen and brush shapes and sizes
§ Support for scalable text fonts and drop shadows
§ Multiple undo capabilities, to let you try again
§ History function for redoing effects, drawings, and text
§ Property inspector
§ Screen capture facility
§ Painting features such as smoothing coarse-edged objects
into the background with anti-aliasing; airbrushing in variable sizes, shapes,
densities, and patterns; washing colors in gradients; blending; and masking.
§ Support for third-party special-effect plug-ins
§ Object and layering capabilities that allow you to create
separate elements independently
§ Zooming, for magnified pixel editing
§ All common color depths: RGB, HSB, grayscale, and CMYK
§ Good palette management and dithering capability among color
depths using various color models such as RGB, HSB, and CMYK
§ Good file importing and exporting capability for image formats such as PIC, GIF, TGA, TIFF,
PNG, WMF, JPG, PCX, EPS, PTN, and BMP
§ TGA (also known as TARGA) is an initialism for Truevision Graphics Adapter; the longer expansion, Truevision Advanced Raster Graphics Adapter, is also used
§ PCX stands for Personal Computer eXchange
§ Encapsulated PostScript (EPS) is a standard file format for
importing and exporting PostScript files. It is usually a single page
PostScript program that describes an illustration or an entire page. The
purpose of an EPS file is to be included in other pages. Sometimes EPS files
are called EPSF files. EPSF simply stands for Encapsulated PostScript Format.
§ PTN: PaperPort Thumbnail Image
3-D Modeling and Animation Tools
3-D modeling software has increasingly entered the mainstream of graphic design as its ease of
use improves. As a result, the graphic production values and expectations for multimedia
projects have risen. With 3-D modeling software, objects rendered in perspective
appear more realistic; we can create stunning scenes and wander through them,
choosing just the right lighting and perspective for our final rendered image.
Powerful modeling packages such as Autodesk's
Maya, Strata 3D, and Avid's SoftImage are also bundled with assortments
of pre-rendered 3-D clip art objects such as people, furniture, buildings,
cars, airplanes, trees, and plants. Specialized applications are also available for creating and
animating 3-D text. Important for multimedia developers, many 3-D
modeling applications also include export features enabling you to save a
moving view or journey through your scene as a QuickTime or MPEG file. Each
rendered 3-D image takes from a few seconds to a few hours to complete, depending
upon the complexity of the drawing and the number of drawn objects included in
it. If you are making a complex walkthrough or flyby, plan to set aside many
hours of rendering time on your computer.
A good 3-D modeling tool should include the following features:
§ Multiple windows that allow you to
view your model in each dimension, from the camera’s perspective, and in a
rendered preview
§ Ability to drag and drop primitive shapes into a scene
§ Ability to create and sculpt
organic objects from scratch
§ Lathe and extrude features
§ Color and texture mapping
§ Ability to add realistic effects
such as transparency, shadowing, and fog
§ Ability to add spot, local, and
global lights, to place them anywhere, and manipulate them for special lighting
effects
§ Unlimited cameras with focal
length control
§ Ability to draw spline-based paths for animation
§ Spline modeling, also known as patch modeling, is an efficient
alternative to using polygons for modeling. A spline in 3-D modeling is a line
that describes a curve, defined by a number of points. The curved spline can
then be lathed or extruded to create 3-D geometry; a mesh created from
intersecting splines consists of areas called patches. (A minimal sketch of the lathe operation follows this list.)
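As a concrete illustration of the lathe feature mentioned above, this Python sketch revolves a 2-D profile curve around the y axis to produce 3-D mesh vertices; the profile points and segment count are invented for the example.

# A minimal sketch of "lathing": revolve a 2-D profile curve around
# the y axis to produce 3-D mesh vertices.
import math

def lathe(profile, segments=16):
    """Revolve (radius, height) profile points around the y axis."""
    vertices = []
    for step in range(segments):
        angle = 2 * math.pi * step / segments
        for radius, height in profile:
            vertices.append((radius * math.cos(angle),
                             height,
                             radius * math.sin(angle)))
    return vertices

# A simple vase-like profile: (radius, height) pairs along the curve.
profile = [(0.0, 0.0), (1.0, 0.2), (0.6, 1.0), (0.8, 1.8)]
mesh_points = lathe(profile, segments=24)
print(len(mesh_points), "vertices")   # 4 profile points x 24 segments = 96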
Image-Editing Tools
Image-editing
applications are specialized and powerful tools for enhancing and retouching
existing bitmapped images. These applications also provide many of the features
and tools of painting and drawing programs and can be used to create images
from scratch as well as images digitized from scanners, video frame-grabbers,
digital cameras, clip art files, or original artwork files created with a
painting or drawing package.
Here
are some features typical of image-editing applications and of interest to
multimedia developers:
§
Multiple windows that provide views of
more than one image at a time
§
Conversion of major image-data types and
industry-standard file formats
§
Direct input of images from scanners
and video sources
§
Employment of a virtual memory scheme
that uses hard disk space as RAM for images that require large amounts of
memory
§
Capable selection tools, such as
rectangles, lassos, and magic wands
§
Good masking features
§
Multiple undo and restore features
§
Anti-aliasing capability, and
sharpening and smoothing controls
§
Color-mapping controls for precise
adjustment of color balance
§
Tools for retouching, blurring,
sharpening, lightening, darkening, smudging, and tinting
§
Geometric transformations such as flip,
skew, rotate, and distort, and perspective changes
§
Ability to resample and resize an image
§
24-bit color, 8- or 4-bit indexed
color, 8-bit grayscale, black-and-white, and customizable color palettes
§
Ability to create images from scratch,
using line, rectangle, square, circle, ellipse, polygon, airbrush, paintbrush,
pencil, and eraser tools, with customizable brush shapes and user-definable
bucket and gradient fills
§
Multiple typefaces, styles, and sizes,
and type manipulation and masking routines
§ Filters
for special effects, such as crystallize, dry brush, emboss,
facet, fresco, graphic pen, mosaic, pixelize, poster, ripple, smooth, splatter,
stucco, twirl, watercolor, wave, and wind
§
Support for third-party special-effect
plug-ins
§
Ability to design in layers that can be
combined, hidden, and reordered
OCR Software
Often we have printed matter and other
text to incorporate into our project, but no electronic text file. Using
optical character recognition (OCR)
software, a flat-bed scanner, and our computer, we can save many hours of retyping
printed words, and get the job done faster and more accurately than a roomful
of typists.
OCR software turns bitmapped characters
into electronically recognizable ASCII text. A scanner is typically used to
create the bitmap. Then the software breaks the bitmap into chunks according to
whether it contains text or graphics, by examining the texture and density of
areas of the bitmap and by detecting edges. The text areas of the image are
then converted to ASCII characters using probability and expert system
algorithms. Most OCR applications claim about 99 percent accuracy when reading
8 to 36-point printed characters at 300 dpi and can reach processing speeds of
150 characters per second. These programs do, however, have difficulty
recognizing poor copies of originals where the edges of characters have bled;
these, and poorly received faxes in small print, may yield more recognition
errors than are worthwhile to correct after the attempted recognition.
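In practice, the whole pipeline is usually a single call into an OCR engine. Here is a hedged sketch, assuming the open-source Tesseract engine and its pytesseract Python wrapper are installed; commercial packages expose similar image-in, text-out interfaces.

# A sketch of OCR in practice: load a scanned page (a bitmap) and
# convert its characters to text. The filename is illustrative.
from PIL import Image
import pytesseract

page = Image.open("scanned_page.png")     # bitmap from a flat-bed scanner
text = pytesseract.image_to_string(page)  # bitmap -> recognized characters
print(text)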
Sound Editing Tools
Sound editing tools for both digitized
and MIDI sound let us see music as well as hear it. By drawing a representation
of a sound in fine increments, whether a score or a waveform, we can cut,
copy, paste, and otherwise edit segments of it with great precision, something
impossible to do in real time.
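Treating a digitized sound as an array of samples makes the cut/copy/paste idea concrete. A minimal Python sketch follows (NumPy assumed); real editors also add crossfades at the cut points to avoid clicks.

# Cut a segment out of a digitized tone and paste it at the front.
import numpy as np

rate = 44_100                              # samples per second (CD quality)
t = np.arange(rate) / rate                 # one second of time stamps
tone = np.sin(2 * np.pi * 440 * t)         # a 440 Hz test tone

middle = tone[rate // 4 : 3 * rate // 4]   # "copy" the middle 0.5 s
rest = np.concatenate((tone[: rate // 4], tone[3 * rate // 4 :]))
edited = np.concatenate((middle, rest))    # "paste" it ahead of the remainder
print(edited.shape)                        # still exactly one second of audio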
System sounds are shipped with both Macintosh
and Windows systems, and they are available as soon as you install the
operating system. System sounds are the beeps used to indicate an error,
warning, or special user activity. Using sound editing software, we can make
our own sound effects and install them as system beeps. We will need to install
software for editing digital sounds.
We can usually incorporate MIDI sound
files into our multimedia project without learning any special skills, but to
make our own MIDI files we need a clear understanding of the way music is
sequenced, scored, and published. We need to know about tempos, clefs,
notations, keys, and instruments. And we will need a MIDI synthesizer or device
connected to our computer. Many MIDI applications provide both sequencing and
notation capabilities, and some let us edit both digital audio and MIDI within
the same application.
Animation, Video, and Digital Movie Tools
Animations and digital video movies are
sequences of bitmapped graphic scenes (frames), rapidly played back. But animations
can also be made within the authoring system by rapidly changing the location
of objects, or sprites, to generate
an appearance of motion. Most authoring tools adopt either a frame- or
object-oriented approach to animation, but rarely both.
Moviemaking tools typically take
advantage of QuickTime (for Macintosh and Windows) and Microsoft Video for Windows, also known as Audio Video Interleaved (AVI), and let us create, edit, and
present digitized motion video segments, usually in a small window in our
project.
To make movies from video, we may need
special hardware to convert the analog video signal into digital data. Macs and
PCs with FireWire (IEEE 1394) ports can import digital video directly from
digital camcorders. Moviemaking tools such as Premiere, Final Cut Pro,
VideoShop, and MediaStudio Pro let us edit and assemble video clips captured
from camera or tape, from other digitized movie segments, animations, and scanned
images, and from digitized audio or MIDI files. The completed clip, often with added transitions and visual
effects, can then be played back—either stand-alone or windowed within our
project.
Video Formats
Formats and systems for storing and
playing digitized video to and from disk files are available with QuickTime and
AVI. Both systems depend on special algorithms that control the amount of
information per video frame that is sent to the screen, as well as the rate at
which new frames are displayed. Both provide a methodology for interleaving, or
blending, audio data with video and other data so that sound remains
synchronized with the video. And both technologies allow data to stream from
disk into memory in a buffered and organized manner. DVD (Digital Versatile Disc) is a hardware format defining a very dense, two-layered disc that uses
laser light and, in the case of recordable discs, heat to store and read
digital information. The digital information or software on a DVD is typically multiplexed
audio, image, text, and video data optimized
for motion picture display using MPEG encoding.
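Conceptually, interleaving just alternates audio chunks with video frames in the stored stream so playback never starves the sound track. Here is a toy Python sketch with placeholder chunks; real QuickTime and AVI files add headers, indexes, and codec-specific framing.

# A conceptual sketch of interleaving: one audio chunk is blended into
# the stream after each video frame so sound stays synchronized.

def interleave(video_frames, audio_chunks):
    """Return a flat stream alternating video frames and audio chunks."""
    stream = []
    for frame, audio in zip(video_frames, audio_chunks):
        stream.append(("video", frame))
        stream.append(("audio", audio))
    return stream

frames = [f"frame{i}" for i in range(3)]
audio = [f"audio{i}" for i in range(3)]
for kind, chunk in interleave(frames, audio):
    print(kind, chunk)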
QuickTime is an organizer of
time-related data forms. Classic videotape involves a video track with two
tracks of (stereo) audio; QuickTime is a multitrack recorder in which you can
have an almost unlimited range of tracks. Digitized video,
digitized sound, computer animations, MIDI
data, external devices such as CD-ROM players and hard disks, and even the
potential for interactive command systems are all supported by the QuickTime
format. With QuickTime, you can have a movie with five different available
languages, titles, MIDI cue tracks, or the potential for interactive commands.
Compressing Movie Files
Image compression algorithms are
critical to the delivery of motion video and audio on both the Macintosh and PC
platforms. Without compression, there is simply not enough bandwidth on the
Macintosh or PC to transfer the massive amounts of data involved in displaying
a new screen image every 1/30 of a second.
To understand compression, consider
these three basic concepts:
- Compression ratio:
The compression ratio represents
the size of the original image divided by the size of the compressed
image, that is, how much the data is actually compressed. Some compression
schemes yield ratios that are dependent on the image content: a busy image
of multicolored tulips may yield a very small compression ratio, and an
image of blue ocean and sky may yield a very high compression ratio. Video
compression schemes typically compress only the information that changes
from image to image (the delta). (A short worked example follows this list.)
- Image quality:
Compression is either lossy or lossless. Lossy schemes ignore picture information that the viewer may
not miss, but that means the picture information is in fact lost, even
after decompression. As more and more information is removed during
compression, image quality decreases. Lossless
schemes preserve the original data precisely, an important
consideration in medical imaging, for example. The compression ratio
typically affects picture quality because, usually, the higher the
compression ratio, the lower the quality of the decompressed image.
- Compression/decompression speed:
You will prefer a fast compression time while developing your projects.
Users, on the other hand, will appreciate a fast decompression time to
increase display performance.
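To make the ratio concrete, here is a tiny worked example in Python; the byte counts are illustrative, not taken from any particular codec.

# A worked example of the compression ratio defined above.
original_bytes = 921_600          # one uncompressed 640 x 480, 24-bit frame
compressed_bytes = 46_080         # the same frame after a lossy codec

ratio = original_bytes / compressed_bytes
print(f"compression ratio: {ratio:.0f}:1")   # prints "compression ratio: 20:1"

# Content matters: a busy tulip field might manage only 3:1 at the same
# quality setting, while flat ocean and sky might exceed 50:1.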
For compressing video frames, the MPEG
format used for DVD employs three types of encoding: I-Frames (Intra), P-Frames
(Predicted), and B-Frames (Bi-directional Predicted). Each type crams more or
less information into the tiniest possible storage space. For example, B- and
P-Frames only contain information that has changed from one frame to the next,
so a sequence from a “talking head” interview might only contain data for the
movement of lips (as long as the rest of the subject is still). B- and P-Frames
cannot be played on their own, then, because they contain only the information
about the lips that changed; the complete image is based on the data stored in
the I-Frame. Sequences of these frame types are compiled into a GOP (Group of
Pictures), and all GOPs are stitched into a stream of images. The result is an
MPEG video file.
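The I-/P-frame idea can be sketched in a few lines of Python (NumPy assumed). This is a toy pixel-difference model only; real MPEG encoders work on motion-compensated 16x16 macroblocks and add B-Frames that reference frames in both directions.

# A toy model of delta encoding: a P-frame stores only what changed
# since the reference I-frame.
import numpy as np

i_frame = np.zeros((4, 4), dtype=np.uint8)      # full reference image
next_frame = i_frame.copy()
next_frame[1, 2] = 200                          # only the "lips" moved

delta = next_frame.astype(int) - i_frame.astype(int)
changed = np.argwhere(delta != 0)               # what a P-frame must carry
print(changed)                                  # [[1 2]] -- one changed pixel

# Decoding a P-frame requires the I-frame it references:
reconstructed = i_frame.astype(int) + delta
assert (reconstructed == next_frame).all()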
Multimedia Authoring Tools
Multimedia authoring tools provide the important
framework you need for organizing and editing the elements of your multimedia
project, including graphics, sounds, animations, and video clips. Authoring
tools are used for designing interactivity and the user interface, for
presenting your project on screen, and for assembling diverse multimedia
elements into a single, cohesive product.
Authoring software provides an
integrated environment for binding together the content and functions of your
project, and typically includes everything you need to create, edit, and import
specific types of data; assemble raw data into a playback sequence or cue
sheet; and provide a structured method or language for responding to user
input. With multimedia authoring software, you can make
- Video
productions
- Animations
- Games
- Interactive
web sites
- Demo
disks and guided tours
- Presentations
- Kiosk
applications
- Interactive
training
- Simulations,
prototypes, and technical visualization
Types of Authoring Tools
This section arranges the various
multimedia authoring tools into groups, based on the method used for sequencing
or organizing multimedia elements and events:
- Card-
or page-based tools
- Icon-based,
event-driven tools
- Time-based
tools
Card-based or page-based
tools are authoring systems, wherein the elements are organized as pages of
a book or a stack of cards. Thousands of pages or cards may be available in the
book or stack. These tools are best used when the bulk of your content consists
of elements that can be viewed individually, like the pages of a book or cards in
a card file. The authoring system lets you link these pages or cards into
organized sequences. You can jump, on command, to any page you wish in the
structured navigation pattern. Card- or page-based authoring systems allow you
to play sound elements and launch animations and digital video.
Icon- or object-based, event-driven
tools are authoring systems, wherein multimedia elements and interaction cues
(events) are organized as objects in a structural framework or process. Icon-
or object-based, event-driven tools simplify the organization of your project
and typically display flow diagrams of activities along branching paths. In
complicated navigational structures, this charting is particularly useful
during development.
Time-based tools are authoring systems, wherein elements
and events are organized along a timeline, with resolutions as high as or
higher than 1/30 second. Time-based tools are best used when you have a
message with a beginning and an end. Sequentially organized graphic frames are
played back at a speed that you can set. Other elements (such as audio events)
are triggered at a given time or location in the sequence of events. The more
powerful time-based tools let you program jumps to any locations in a sequence,
thereby adding navigation and interactive control.
Objects
In multimedia authoring systems,
multimedia elements and events are often treated as objects that live in a hierarchical order of parent and child
relationships. Messages passed among these objects order them to do things according to the properties or modifiers assigned to them. In this way, for example, Teen-child (a
teenager object) may be programmed to take out the trash every Friday evening,
and does so when it gets a message from Dad. Spot, the puppy, may bark and
jump up and down when the postman arrives, as defined by barking and jumping
modifiers. Objects typically take care of themselves: send them a message and
they do their thing without external procedures and programming. Objects are
particularly useful for games, which contain many components with many
"personalities", all useful for simulating real-life situations, events, and their
constituents.
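The message-and-modifier metaphor maps naturally onto a few lines of Python; the class and behavior names below are invented for the example.

# Objects carry their own modifiers (properties) and respond to
# messages without external procedures.
class SceneObject:
    def __init__(self, name, **modifiers):
        self.name = name
        self.modifiers = modifiers          # e.g. barking=True, jumping=True

    def receive(self, message):
        """React to a message according to this object's own modifiers."""
        for behavior, enabled in self.modifiers.items():
            if enabled:
                print(f"{self.name} responds to '{message}' by {behavior}")

spot = SceneObject("Spot", barking=True, jumping=True)
spot.receive("postman arrives")     # Spot barks and jumps on his own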
Different Stages of Authoring
There are five distinct stages of
multimedia authoring:
- Analysis: What do you need to do, and what do you use to do it?
- Design: Create storyboards to tell the story of the project.
- Development: Incorporate data and set it up as a prototype or model.
- Evaluation: When the prototype application works the way you want it to, test it again, fine-tune it, make it sexy, and then review your work.
- Distribution: When it is ready to go (after the evaluation phase), make it real, package it, and distribute it.
There are two basic kinds of computer font file data formats:
Bitmap fonts
consist of a series of dots or pixels representing the image of each glyph
in each face and size.
Outline fonts (also called vector fonts) use Bézier curves, drawing instructions and mathematical formulae to describe each glyph (writing element), which make the character outlines scalable to any size.
Bitmap fonts are faster and easier to use in computer code, but inflexible, requiring a separate font for each size. Outline fonts can be resized using a single font and substituting different measurements, but are somewhat more complicated to use than bitmap fonts as they require additional computer code to render the outline to a bitmap for display on screen or in print.
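The scalability of outline fonts follows directly from the curve mathematics. The Python sketch below evaluates a quadratic Bezier (the curve type TrueType uses; PostScript Type 1 uses cubics) at two sizes; the control points are invented for illustration, not taken from a real font.

# Why outlines scale: a glyph stroke stored as a quadratic Bezier can
# be evaluated at any size just by scaling its control points.
def quadratic_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t (0..1)."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return x, y

for size in (12, 48):                     # render the same stroke at 12 and 48 pt
    scale = size / 12
    points = [quadratic_bezier((0, 0), (0.5 * scale, 1 * scale),
                               (1 * scale, 0), t / 4) for t in range(5)]
    print(size, "pt:", [(round(x, 2), round(y, 2)) for x, y in points])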
TrueType
TrueType is an outline font standard originally developed by Apple Computer in the late 1980s as a competitor to Adobe's Type 1 fonts.
The
primary strength of TrueType was originally that it offered font developers
a high degree of control over precisely how their fonts are displayed, right
down to particular pixels, at
various font sizes. TrueType has long been the
most common format for fonts on Mac OS and Windows, although both also include native support for
Adobe's Type 1 format.
PostScript (PS) is a dynamically typed concatenative programming language created by John Warnock and Charles Geschke in 1982. PostScript is best known for
its use as a page description language in the electronic and desktop publishing areas. PostScript fonts are outline font specifications developed by Adobe Systems for professional digital typesetting, which use PostScript file formats such as .pfb (Printer Font
Binary, PC), .afm (Adobe Font Metrics, Mac), and .pfa (Printer Font ASCII, Linux) to
encode font information. Adobe Type Manager (ATM) handles many formats
of PostScript fonts, such as Type 0, Type 1, 2, 3, 4, 5, 9, 10, 11, 14, 32, and 42.
Animation
Animation makes a static presentation come alive.
It is defined as a visual change over time that adds great power to our
multimedia projects and web pages. Many multimedia applications for both
Macintosh and Windows provide animation tools.
Principles of Animation
Animation
is possible because of a biological phenomenon known as persistence of vision and a psychological phenomenon called phi. An object seen by the human eye
remains chemically mapped on the eye’s retina for a brief time after viewing.
Combined with the human mind’s need to conceptually complete a perceived
action, this makes it possible for a series of images that are changed very
slightly and very rapidly, one after the other, to seemingly blend together
into a visual illusion of movement.
Television
video builds 30 entire frames or pictures every second; the speed with which
each frame is replaced by the next one makes the images appear to blend
smoothly into movement. Movies on film are typically shot at a shutter rate of
24 frames per second.
Animation by Computer
Using
appropriate software and techniques, we can animate visual images in many ways.
The simplest animations occur in two-dimensional (2-D) space; more complicated
animations occur in an intermediate "2½-D" space where shadowing,
highlights, and forced perspective provide an illusion of depth, the third
dimension; and the most realistic animations occur in three-dimensional (3-D)
space.
In
2-D space, the visual changes that bring an image alive occur on the flat
Cartesian x and y axes of the screen. A blinking word, a color-cycling logo (where the colors change rapidly), or a button or tab
that changes state on mouse rollover to let a user know it is active are all
examples of 2-D animations. These
are simple and static, not changing their position on the screen. Path animation in 2-D space increases
the complexity of an animation and provides motion, changing the location of an
image along a predetermined path during a specified amount of time. Authoring
and presentation software such as Flash or PowerPoint provide user-friendly
tools to compute position changes and redraw an image in a new location.
In 2½-D animation, an
illusion of depth (the z axis) is added to an image through shadowing and
highlighting, but the image itself still rests on the flat x and y axes in two
dimensions. Embossing, shadowing, beveling, and highlighting provide a sense of
depth by raising an image or cutting it into a background.
In 3-D animation, software creates a virtual realm in three dimensions, and changes (motion) are calculated along all three axes (x, y, and z), allowing an image or object that itself is created with a front, back, sides, top, and bottom to move toward or away from the viewer, or, in this virtual space of light sources and points of view, allowing the viewer to wander around and get a look at all the object's parts from all angles. Such animations are typically rendered frame by frame by high-end 3-D animation programs such as NewTek's LightWave or Autodesk's Maya.
Animation Techniques
While
creating an animation, we have to organize its execution into a series of
logical steps. First, we have to gather up all the activities we wish to
provide in the animation. If it is complicated, we may wish to create a written
script with a list of activities and required objects and then create a
storyboard to visualize the animation. Second, we choose the animation tool that
is best suited for the job, and then build and tweak the sequences. This may
include creating objects, planning their movements, texturing their surfaces,
adding lights, experimenting with lighting effects, and positioning the camera
or point of view. We should allow plenty of time for this phase when we are experimenting
and testing. Finally, we have to post-process the animation, by doing special
renderings and adding sound effects.
Cel Animation
The
animation techniques made famous by Disney use a series of progressively
different graphics, or cels, on each frame of movie film (which plays at 24 frames per
second). A minute of animation may thus require as many as 1,440 separate
frames, and each frame may be composed of many layers of cels. The term cel derives from the clear celluloid
sheets that were used for drawing each frame, which have been replaced today by
layers of digital imagery. Cels of famous animated cartoons have become
sought-after, suitable-for-framing collector’s items.
Cel animation artwork begins with keyframes. For example, when an
animated figure of a woman walks across the screen, she balances the weight of
her entire body on one foot and then the other in a series of falls and recoveries,
with the opposite foot and leg catching up to support the body. Thus the first
keyframe to portray a single step might be the woman pitching her body weight
forward off the left foot and leg, while her center of gravity shifts forward;
the feet are close together, and she appears to be falling. The last keyframe
might be the right foot and leg catching the body’s fall, with the center of
gravity now centered between the outstretched stride and the left and right
feet positioned far apart.
The
series of frames in between the keyframes are drawn in a process called
tweening. Tweening is an action that requires calculating the number of
frames between keyframes and the path
the action takes, and then actually sketching with pencil the series of
progressively different outlines. As tweening progresses, the action sequence
is checked by flipping through the frames.
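A minimal Python sketch of the frame-counting side of tweening follows, assuming straight-line motion between two keyframe positions; real tweening also interpolates the drawn outlines and usually applies easing curves.

# Compute the in-between frames for one step of a walk cycle.
def tween(start, end, seconds, fps=24):
    """Yield one (x, y) position per frame between two keyframes."""
    frames = int(seconds * fps)             # number of in-betweens to draw
    for i in range(frames + 1):
        t = i / frames                      # 0.0 at the first keyframe, 1.0 at the last
        yield (start[0] + (end[0] - start[0]) * t,
               start[1] + (end[1] - start[1]) * t)

# Half a second at film rate (24 fps) gives 12 in-between positions:
for x, y in tween((100, 240), (160, 240), seconds=0.5):
    pass                                    # draw the figure at (x, y)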
Kinematics
Kinematics
is the study of the movement and motion of structures that have joints, such as
a walking man. Animating a walking step is tricky: you need to calculate the
position, rotation, velocity, and acceleration of all the joints and articulated
parts involved: knees bend, hips flex, shoulders swing, and the head bobs.
e-frontier's Poser, a 3-D modeling program, provides pre-assembled adjustable
human models in many poses, such as "walking" or "thinking". Surface textures can then be applied to
create muscle-bound hulks. Inverse
Kinematics, available in high-end 3-D programs such as Lightwave and Maya,
is the process by which we link objects such as hands to arms and define their
relationship and limits (for example, elbows cannot bend backwards). Once those
relationships and parameters have been set, we can then drag these parts around
and let the computer calculate the result.
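The flavor of that calculation can be shown with the classic two-link (upper arm and forearm) planar solution, using the law of cosines. This Python sketch is hand-rolled for illustration, not how LightWave or Maya expose the feature; link lengths and the target point are invented.

# Solve shoulder and elbow angles so the "hand" reaches a target point.
import math

def two_link_ik(x, y, upper=1.0, forearm=1.0):
    """Return (shoulder, elbow) angles in radians reaching point (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance:
    cos_elbow = (d2 - upper ** 2 - forearm ** 2) / (2 * upper * forearm)
    cos_elbow = max(-1.0, min(1.0, cos_elbow))   # clamp for unreachable targets
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(
        forearm * math.sin(elbow), upper + forearm * math.cos(elbow))
    return shoulder, elbow

s, e = two_link_ik(1.2, 0.8)
print(round(math.degrees(s), 1), round(math.degrees(e), 1))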
Morphing
Morphing is a popular effect in which one image
transforms into another. Morphing applications and other modeling tools that
offer this effect can transition not only between still images but often between
moving images as well.
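A morph combines a geometric warp with a cross-dissolve. The warp is the hard part; the dissolve half is simple enough to sketch in Python (NumPy assumed), with placeholder images standing in for the source and target.

# Blend two equal-sized images by a weight that moves from 0 to 1.
import numpy as np

def dissolve(image_a, image_b, t):
    """Weighted blend of two images; t=0 is all A, t=1 is all B."""
    return ((1 - t) * image_a + t * image_b).astype(np.uint8)

a = np.full((2, 2, 3), 255, dtype=np.uint8)   # white placeholder image
b = np.zeros((2, 2, 3), dtype=np.uint8)       # black placeholder image
midpoint = dissolve(a, b, 0.5)                # the halfway frame of the morph
print(midpoint[0, 0])                         # [127 127 127]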
Animation File Formats
The different animation file formats include Director (.dir and .dcr), Animator
Pro (.fli, using 320 x 200-pixel images, and .flc), 3D Studio Max (.max),
SuperCard and Director (.pics), CompuServe GIF89a (.gif), and Flash (.fla and
.swf). Since file size is a critical factor when downloading animations
from the Internet, selecting an appropriate file compression
technique is an essential part of preparing animation for the web. For example,
Director's native file format (.dir) must be preprocessed and compressed into
a proprietary Shockwave animation file (.dcr) for the web. This compression
decreases file size by at least 75 percent, significantly speeding up download and
display time on the Internet. Flash, a widely used animation application
for the web, uses vector graphics to keep post-compression file size to a minimum. The
native .fla files must be converted into Shockwave Flash files (.swf) to play
on the web. To view animations on the web, certain plug-ins, players, or
add-ons, such as the Flash Player, are required.
With 3-D animation, the individual rendered frames of an animation are put together
into a standard digital video file format such as Windows Audio Video
Interleaved (.avi), QuickTime (.qt and .mov), or Moving Picture Experts Group
video (.mpeg or .mpg). These files can be played using the media players that
come with computer operating systems.
Video
Video is the technology of electronically capturing, recording, processing, storing, transmitting, and reconstructing a sequence of still images representing scenes in motion. With video elements in our project, we can present our message effectively. But carelessly produced video may degrade our presentation. Video places higher demands on our computer's performance, memory, and storage than any other multimedia element. To deliver video across a network, we may use a superfast RAID (Redundant Array of Independent Disks) system that supports high-speed data transfer rates.
How Video Works
When the light reflected from an object passes through a video camera lens,
the light is converted into an electronic signal by a special sensor called a
charge-coupled device (CCD). The output of the CCD is then processed by the
camera into a signal containing three channels of color information and
synchronization pulses. There are several video standards available for
managing the CCD output, and each standard differs in the amount of
separation between the components of the signal. The more separation between the
components of the signal, the higher the quality of the image. If each
channel of color information is transmitted as a separate signal, the signal
output is called RGB, which is the preferred method for higher-quality and
professional video production. The output can also be separated into two
separate chroma (color) channels, Cb and Cr (blue and red chroma), and a luma
component channel (Y) that carries the dark and light parts of the video.
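The luma/chroma split can be written down directly. The Python sketch below uses the ITU-R BT.601 weights common in standard-definition video; the input values are illustrative, in the 0.0 to 1.0 range.

# Derive a luma channel (Y) and two chroma channels (Cb, Cr) from RGB.
def rgb_to_ycbcr(r, g, b):
    """Split RGB into luma and chroma using ITU-R BT.601 weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # brightness (dark/light detail)
    cb = 0.564 * (b - y)                     # blue-difference chroma
    cr = 0.713 * (r - y)                     # red-difference chroma
    return y, cb, cr

print(rgb_to_ycbcr(1.0, 0.0, 0.0))   # pure red: low luma, strong Cr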
In analog systems, the video signal from the camera is delivered to the
video-in connectors of a VCR (Video Cassette Recorder), where it is recorded on
a magnetic videotape. One or two channels (mono or stereo) of sound can be
recorded on the videotape. The video signal is written on the tape by a
spinning recording head that changes the local magnetic properties of the
tape's surface in a series of long diagonal stripes. Since the VCR head is
tilted at an angle as compared to the path of the tape, it follows a helical or
spiral path, which is known as helical scan recording. A single video frame is
made up of two fields that are interlaced, and audio is recorded in a separate
straight-line track at the top of the videotape. At the bottom of the tape is a
control track that contains the pulses used to regulate speed. A term called
tracking is used for making fine adjustment of the tape so that the tracks are
properly aligned as the tape moves across the playback head.
In digital systems, the video signal from the camera is first digitized as a
single frame and the data is compressed before it is written to the tape in any
of the formats like DV (Digital Video), DVCPRO (Digital Video Cassette
Professional) or DVCAM (Digital Video Camcorder). In professional situations,
other videotape configurations are available for high-end
video production with high-end equipment.
Colored phosphors on the CRT screen glow red, green, or blue when they are
energized by the electron beam. As the intensity of the beam varies while it
moves across the screen, some colors glow brighter than others. Finely tuned
magnets around the picture tube aim the electrons precisely
at the phosphor screen, so speakers with strong magnets placed beside a
CRT display may change the intensity of color in parts of the picture. A strong
external magnetic field will skew the electron beam to one area of the screen and
sometimes cause a permanent blotch that cannot be fixed even by
degaussing, an electronic process that readjusts the magnets that
guide the electrons.
Broadcast Video
Standards
Analog Video Standards
There are three analog broadcast video standards commonly in use around the
world: NTSC, PAL, and SECAM. In the United States, the NTSC standard is being phased out by
the ATSC (Advanced Television Systems Committee) Digital Television standard.
These standards and formats are not easily interchangeable,
so we have to know where our multimedia project will be implemented. A video
cassette recorded in the United States using the NTSC standard may not play on a television
set in Europe that uses PAL or SECAM, even though the recording method and style
of the cassette is VHS (Video Home System). Each system is based on a different
standard that defines the way information is encoded to produce the electronic
signal that creates the television picture. Multiformat VCRs can play
all three standards but cannot dub from one standard to another; dubbing
between standards requires specialized, high-end equipment.
NTSC (National
Television Standards Committee)
- Used for broadcasting and
displaying video in the United States, Canada, Mexico, Japan, and other
countries
- Established in 1952 by the National
Television Standards Committee
- Defines a method for encoding
information into the electronic signal that ultimately creates a television
picture
- A single frame of video is made
up of 525 horizontal scan lines drawn onto the inside face of a phosphor-coated
picture tube every 1/30 of a second by a fast-moving electron beam
- The drawing occurs so fast that
the human eye perceives the images as stable
- The electron beam makes two
passes (at the rate of 60 per second, or 60Hz) while it draws a single
video frame
- First it lays down all the
odd-numbered lines and then all the even-numbered lines
- Each of the passes paints a field,
and the two fields are combined to create a single frame at a rate of 30
frames per second
- This process of creating a single
frame from two fields is known as interlacing,
which prevents flicker on television sets (a minimal sketch of field splitting follows this list)
- Computer monitors use
progressive-scan technology, which draws the lines of an entire frame in a
single pass, without interlacing and without flicker
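Here is a minimal Python sketch of interlacing (NumPy assumed): a toy frame is split into the odd-line and even-line fields painted by the two passes, and recombining the fields reproduces the frame.

# Split one frame into the two fields painted by the two passes.
import numpy as np

frame = np.arange(6 * 4).reshape(6, 4)   # a toy 6-line "frame"

odd_field = frame[0::2]    # lines 1, 3, 5 (first pass)
even_field = frame[1::2]   # lines 2, 4, 6 (second pass)

# Recombining the two fields reproduces the full frame:
rebuilt = np.empty_like(frame)
rebuilt[0::2], rebuilt[1::2] = odd_field, even_field
assert (rebuilt == frame).all()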
PAL (Phase Alternating
Line)
- Used in the United Kingdom, Western
Europe, Australia, South Africa, China, and South America
- PAL increases the screen
resolution to 625 horizontal lines but slows the scan rate to 25
frames per second
- Just as in NTSC, the even and
odd lines are interlaced, each field taking 1/50 of a second
to draw (50 Hz)
SECAM
(Sequential Color with Memory)
- SECAM is taken from the
French name, Système Electronique pour Couleur Avec Mémoire (or Séquentiel
Couleur Avec Mémoire)
- Used in France, Eastern Europe, the former
USSR, and a few other countries
- Although SECAM is a 625-line, 50
Hz system, it differs greatly from both the NTSC and the PAL color
systems in its basic technology and broadcast method. Often, however, TV
sets sold in Europe utilize dual components and can handle both PAL and
SECAM systems.
ATSC DTV
- The Federal Communications
Commission initiated work on High Definition Television (HDTV) in the 1980s;
the effort was first changed into the Advanced Television (ATV) initiative and then
finalized as Digital Television (DTV) in 1996.
- This standard, which was slightly
modified from both the Digital Television Standard (ATSC Doc. A/53) and
the Digital Audio Compression Standard (ATSC Doc. A/52), moved U.S.
television from an analog to a digital standard.
- It provided TV stations with
sufficient bandwidth to present four or five Standard Television (STV,
providing the NTSC's resolution of 525 lines with a 4:3 aspect ratio, but
in a digital signal) signals or one HDTV signal (providing 1080 lines of
resolution with a movie screen's 16:9 aspect ratio).
- More significantly for multimedia
producers, this emerging standard allows for transmission of data to
computers and for new ATV interactive services.
High Definition
Television (HDTV)
- Provides high resolutions in a
16:9 aspect ratio.
- This aspect ratio allows the
viewing of Cinemascope and Panavision movies.
- There is confusion between the
broadcast and computer industries about whether to use interlacing or
progressive-scan technologies. The broadcast industry has promulgated an
ultra-high-resolution, 1920x1080 interlaced format to become the cornerstone
of a new generation of high-end entertainment centers, but the computer
industry would like to settle on a 1280x720 progressive-scan format. While the
1920x1080 format provides more pixels than the 1280x720 standard, the refresh
rates are quite different (see the short calculation after this list).
- The higher-resolution interlaced
format delivers only half the picture every 1/60 of
a second, and because of the interlacing, highly detailed images show
a great deal of screen flicker at 30 Hz.
- The computer people argue that
the picture quality at 1280x720 is superior and steady. Both formats have
been included in the HDTV standard by the Advanced Television Systems Committee (ATSC).
- While more and more video is
produced only for digital display platforms (for the web, for a CD-ROM
tour, or as an HDTV DVD presentation), analog television sets, while still
in use, are rapidly being replaced by digital monitors, which are becoming the most
widely installed platform for delivering and viewing video.
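The pixel arithmetic behind that disagreement is simple enough to check in a couple of lines of Python:

# Pixels per full frame in the two competing HDTV formats.
interlaced = 1920 * 1080      # 2,073,600 pixels per full frame
progressive = 1280 * 720      #   921,600 pixels per frame

print(interlaced, progressive, f"{interlaced / progressive:.2f}x")
# But the interlaced format paints only half of those lines in each 1/60 s
# pass, while progressive scan refreshes every pixel of its frame each pass.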
Digital Display
Standards
- Advanced
Television Systems Committee (ATSC) is the digital television standard for the United
States, Canada, Mexico, Taiwan, and South Korea, and is being considered in
other countries. It supports a wide 16:9 screen aspect ratio with images
up to 1920x1080 pixels in size, as well as a number of other image sizes, allowing
up to six standard-definition "virtual channels" to be broadcast on a single TV station using the existing 6
MHz channel. It boasts "theater quality" sound because it uses the Dolby Digital
AC-3 format to provide 5.1-channel surround sound.
- Digital
Video Broadcasting (DVB)
is used mostly in Europe where the standards define the physical layer and
data link layer of a distribution system.
- Integrated
Services Digital Broadcasting (ISDB) is used in Japan to allow radio and television
stations to convert to digital format.