ABSTRACT
Haptic technology refers to technology that interfaces the user with a virtual environment via the sense of touch by applying forces, vibrations, and/or motions to the user. This mechanical stimulation may be used to assist in the creation of virtual objects (objects existing only in a computer simulation), for control of such virtual objects, and to enhance the remote control of machines and devices (teleoperators). This emerging technology promises to have wide-reaching applications, as it already has in some fields. For example, haptic technology has made it possible to investigate in detail how the human sense of touch works by allowing the creation of carefully controlled haptic virtual objects. These objects are used to systematically probe human haptic capabilities, which would otherwise be difficult to achieve. These new research tools contribute to our understanding of how touch and its underlying brain functions work. Although haptic devices are capable of measuring bulk or reactive forces that are applied by the user, they should not be confused with touch or tactile sensors that measure the pressure or force exerted by the user on the interface. The term haptic originates from the Greek word ἁπτικός (haptikos), meaning pertaining to the sense of touch, and comes from the Greek verb ἅπτεσθαι (haptesthai), meaning to "contact" or "touch".
HISTORY OF HAPTICS
In the early 20th century, psychophysicists introduced the word haptics to label the subfield of their studies that addressed human touch-based perception and manipulation. In the 1970s and 1980s, significant research efforts in a completely different field, robotics, also began to focus on manipulation and perception by touch. Initially concerned with building autonomous robots, researchers soon found that building a dexterous robotic hand was much more complex and subtle than their initial naive hopes had suggested.

In time these two communities, one that sought to understand the human hand and one that aspired to create devices with dexterity inspired by human abilities, found fertile mutual interest in topics such as sensory design and processing, grasp control and manipulation, object representation and haptic information encoding, and grammars for describing physical tasks.

In the early 1990s a new usage of the word haptics began to emerge. The confluence of several emerging technologies made virtualized haptics, or computer haptics, possible. Much like computer graphics, computer haptics enables the display of simulated objects to humans in an interactive manner. However, computer haptics uses a display technology through which objects can be physically palpated.
WORKING OF HAPTIC SYSTEMS
BASIC SYSTEM CONFIGURATION
[Fig 2.1: Haptic system configuration. The human side (hand, muscles, sensory receptors providing tactile and kinesthetic information, motor commands) mirrors the machine side (end effector, actuators, sensors, computer haptics), with motion, contact forces, position information, and torque commands exchanged between the two.]
Basically, a haptic system consists of two parts, namely the human part and the machine part. In the figure shown above, the human part (left) senses and controls the position of the hand, while the machine part (right) exerts forces on the hand to simulate contact with a virtual object. Both systems are provided with the necessary sensors, processors, and actuators. In the case of the human system, nerve receptors perform sensing, the brain performs processing, and the muscles perform actuation of the motion of the hand, while in the case of the machine system these functions are performed by the encoders, computer, and motors respectively.
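This sense-process-actuate loop can be sketched in a few lines of code. The following is a minimal illustration only: the stand-in encoder and motor functions and the spring constant of the virtual wall are assumptions made for the sketch, not a real device API.

    # Machine side of the loop in Fig 2.1: sense, process, actuate.
    STIFFNESS = 500.0  # N/m, illustrative stiffness of a virtual wall at x = 0
    WALL_X = 0.0       # wall surface position (m); x < WALL_X means penetration

    def read_encoders(step):
        """Stand-in for the device encoders: a hand moving 1 mm per step into the wall."""
        return 0.002 - 0.001 * step  # metres

    def command_motors(force):
        """Stand-in for the motor amplifiers: here we just print the command."""
        print(f"commanded force: {force:+.2f} N")

    def compute_contact_force(x):
        """Processing step: spring-like reaction while the hand is inside the wall."""
        penetration = WALL_X - x
        return STIFFNESS * penetration if penetration > 0 else 0.0

    # Servo loop (real systems run this at roughly 1 kHz)
    for step in range(5):
        x = read_encoders(step)       # sensing (machine analogue of nerve receptors)
        f = compute_contact_force(x)  # processing (analogue of the brain)
        command_motors(f)             # actuation (analogue of the muscles)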
HAPTIC INFORMATION
Basically, the haptic information provided by the system is the combination of (i) tactile information and (ii) kinesthetic information.

Tactile information refers to the information acquired by the sensors connected to the skin of the human body, with particular reference to the spatial distribution of pressure, or more generally, tractions, across the contact area. For example, when we handle flexible materials like fabric and paper, we sense the pressure variation across the fingertip. This is a form of tactile information. Tactile sensing is also the basis of complex perceptual tasks like medical palpation, where physicians locate hidden anatomical structures and evaluate tissue properties using their hands. Kinesthetic information refers to the information acquired through the sensors in the joints. Interaction forces are normally perceived through a combination of these two types of information.
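One way to picture the two channels in software is as separate data structures that a haptic system combines. The field names below are illustrative assumptions, not a standard representation.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TactileInfo:
        """Spatial distribution of pressure (tractions) across the contact area."""
        pressure_grid: List[List[float]]  # Pa, e.g. a small fingertip sensor array

    @dataclass
    class KinestheticInfo:
        """Information acquired through sensors in the joints."""
        joint_angles: List[float]   # rad
        joint_torques: List[float]  # N*m

    @dataclass
    class HapticInfo:
        """Interaction forces are normally perceived through both channels combined."""
        tactile: TactileInfo
        kinesthetic: KinestheticInfo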
CREATION OF VIRTUAL ENVIRONMENT (VIRTUAL REALITY)
Virtual reality is the technology which allows a user to interact with a computer-simulated environment, whether that environment is a simulation of the real world or an imaginary world. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Some advanced haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications. Users can interact with a virtual environment or a virtual artifact (VA) either through the use of standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove, the Polhemus boom arm, and the omnidirectional treadmill. The simulated environment can be similar to the real world, for example, simulations for pilot or combat training, or it can differ significantly from reality, as in VR games. In practice, it is currently very difficult to create a high-fidelity virtual reality experience, due largely to technical limitations on processing power, image resolution, and communication bandwidth. However, those limitations are expected to eventually be overcome as processor, imaging, and data communication technologies become more powerful and cost-effective over time.
Virtual reality is often used to describe a wide variety of applications, commonly associated with its immersive, highly visual, 3D environments. The development of CAD software, graphics hardware acceleration, head-mounted displays, data gloves, and miniaturization have helped popularize the notion. The most successful use of virtual reality is in computer-generated 3D simulators. Pilots use flight simulators, which are designed just like the cockpit of an airplane or helicopter. The screen in front of the pilot creates the virtual environment, and the trainers outside the simulator command it to adopt different modes. The pilots are trained to control the plane in difficult situations and emergency landings. The simulator provides the environment; such simulators cost millions of dollars. Virtual reality games are used in much the same fashion. The player wears special gloves, headphones, goggles, a full-body suit, and special sensory input devices, so that the player feels as if in a real environment. The special goggles have built-in monitors, and the environment changes according to the movements of the player. These games are very expensive.
[Figure: Structure of a VR application. The human operator receives video and audio output and interacts through a haptic device; a simulation engine drives the audio-visual rendering and haptic rendering modules.]
Virtual reality (VR) applications strive to simulate real or imaginary
scenes with which users can interact and perceive the effects of their actions
in real time. Ideally the user interacts with the simulation via all five
senses. However, today’s typical VR applications rely on a smaller subset,
typically vision, hearing, and more recently, touch.
The figure above shows the structure of a VR application incorporating visual, auditory, and haptic feedback.
The application's main elements are:
1) The simulation engine, responsible for computing the virtual environment's behavior over time;
2) Visual, auditory, and haptic rendering algorithms, which compute the virtual environment’s graphic, sound, and force responses toward the user; and
3) Transducers, which convert visual, audio, and force signals from the computer into a form the operator can perceive.
The human operator typically holds or wears the haptic interface device and perceives audiovisual feedback from audio (computer speakers, headphones, and so on) and visual displays (for example, a computer screen or head-mounted display). Whereas the audio and visual channels feature unidirectional information and energy flow (from the simulation engine toward the user), the haptic modality exchanges information and energy in two directions, from and toward the user. This bidirectionality is often referred to as the single most important feature of the haptic interaction modality.
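A sketch of how these elements might fit together in a single update loop is shown below. The engine, renderer, and device objects are assumed interfaces invented for the illustration, not any real engine's API; the point is that only the haptic channel appears on both the input and output sides.

    def vr_main_loop(engine, graphics, audio, haptic_device, dt=0.001):
        """One possible structure for a VR update loop with haptic feedback."""
        while engine.running:
            # Bidirectional haptic channel, input half: pose flows FROM the user.
            pose = haptic_device.read_pose()
            engine.step(pose, dt)               # simulation engine: world behavior over time
            graphics.render(engine.scene)       # unidirectional flow toward the user
            audio.render(engine.events)         # unidirectional flow toward the user
            force = engine.contact_force(pose)  # haptic rendering: force response
            haptic_device.apply_force(force)    # output half: force flows TOWARD the user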
HAPTIC DEVICES
A haptic device is one that provides a physical interface between the user and the virtual environment by means of a computer. This can be done through an input/output device that senses the body's movement, such as a joystick or data glove. By using haptic devices, the user can not only feed information to the computer but can also receive information from the computer in the form of a felt sensation on some part of the body. This is referred to as a haptic interface.
Haptic devices can be broadly classified into:
VIRTUAL REALITY/TELEROBOTICS-BASED DEVICES
EXOSKELETONS AND STATIONARY DEVICES
The term exoskeleton refers to the hard outer shell that exists on many creatures. In a technical sense, the word refers to a system that covers the user or that the user has to wear. Current haptic devices that are classified as exoskeletons are large and immobile systems to which the user must attach him- or herself.
GLOVES AND WEARABLE DEVICES
These devices are smaller, exoskeleton-like devices that are often, but not always, held in place by a large exoskeleton or other immobile device. Since the goal of building a haptic system is to be able to immerse the user in the virtual or remote environment, it is important to provide as small a reminder of the user's actual environment as possible. The drawback of wearable systems is that, since the weight and size of the devices are a concern, the systems have more limited sets of capabilities.
POINT SOURCES AND SPECIFIC TASK DEVICES
This is a class of devices that
are very specialized for performing a particular given task. Designing a device
to perform a single type of task restricts the application of that device to a
much smaller number of functions. However it allows the designer to focus the
device to perform its task extremely well. These task devices have two general
forms, single point of interface devices and specific task devices.
LOCOMOTION INTERFACES
An interesting application of haptic feedback is full-body force feedback in the form of locomotion interfaces. Locomotion interfaces are force-restricting movement devices operating in a confined space, simulating unrestrained mobility such as walking and running for virtual reality. These interfaces overcome the limitations of joysticks for maneuvering, of whole-body motion platforms, in which the user is seated and does not expend energy, and of room environments, where only short distances can be traversed.
FEEDBACK DEVICES
FORCE FEEDBACK DEVICES
Force feedback input devices are usually, but not exclusively, connected to computer systems and are designed to apply forces to simulate the sensation of weight and resistance in order to provide information to the user. As such, feedback hardware represents a more sophisticated form of input/output device, complementing others such as keyboards, mice, or trackers. Input from the user is in the form of hand or other body-segment motion, whereas feedback from the computer or other device is in the form of force or position. These devices translate digital information into physical sensations.
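As a rough illustration of that translation, the sketch below computes the force a device might apply to simulate the weight of a held object plus viscous resistance to its motion; the constants are illustrative assumptions.

    MASS = 0.5      # kg, simulated weight of the held virtual object
    DAMPING = 5.0   # N*s/m, simulated resistance to motion
    GRAVITY = 9.81  # m/s^2

    def feedback_force(vertical_velocity):
        """Force (N) the device applies: simulated weight plus motion resistance."""
        weight = MASS * GRAVITY                   # constant pull of the object's mass
        resistance = DAMPING * vertical_velocity  # viscous drag opposing motion
        return -(weight + resistance)

    print(feedback_force(0.0))  # holding still: weight only
    print(feedback_force(0.2))  # lifting: weight plus added resistance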
TACTILE DISPLAY DEVICES
Simulation tasks involving active exploration or delicate manipulation of a virtual environment require the addition of feedback data that presents an object's surface geometry or texture. Such feedback is provided by tactile feedback systems or tactile display devices. Tactile systems differ from haptic systems in the scale of the forces being generated. While haptic interfaces present the shape, weight, or compliance of an object, tactile interfaces present the surface properties of an object, such as its surface texture. Tactile feedback applies sensations to the skin.
COMMONLY USED HAPTIC INTERFACING DEVICES
PHANTOM
The PHANTOM is a haptic interfacing device developed by a company named SensAble Technologies. It is primarily used for providing a 3D touch for virtual objects. It is a very high resolution, 6-DOF device in which the user holds the end of a motor-controlled, jointed arm. It provides a programmable sense of touch that allows the user to feel the texture and shape of a virtual object with a very high degree of realism. One of its key features is that it can model free-floating three-dimensional objects. The figure below shows the contact display design of a PHANTOM device. When the user places a finger in the thimble connected to the metal arm of the device and moves the finger, he can really feel the shape and size of the virtual three-dimensional object that has been programmed into the computer. The virtual three-dimensional space in which the PHANTOM operates is called the haptic scene, which is a collection of separate haptic objects with different behaviors and properties. The DC motor assembly is mainly used for converting the movement of the finger into a corresponding virtual movement.
Fig 4.1 PHANTOM
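The haptic scene described above can be pictured as a plain collection of objects, each carrying its own feel properties. The class and field names below are illustrative assumptions, not the PHANTOM's actual programming interface.

    from dataclasses import dataclass

    @dataclass
    class HapticSphere:
        """One object in a haptic scene, with its own behavior and properties."""
        center: tuple     # (x, y, z) position in metres
        radius: float     # metres
        stiffness: float  # N/m: how hard its surface feels
        friction: float   # surface friction coefficient

    # A haptic scene: separate haptic objects with different properties.
    scene = [
        HapticSphere(center=(0.00, 0.0, 0.10), radius=0.03, stiffness=800.0, friction=0.2),
        HapticSphere(center=(0.05, 0.0, 0.10), radius=0.02, stiffness=300.0, friction=0.6),
    ]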
CYBERGLOVE
The principle of the CyberGlove is simple. It consists of opposing the movement of the hand in the same way that an object squeezed between the fingers resists the movement of the latter. The glove must therefore be capable, in the absence of a real object, of recreating the forces applied by the object on the human hand with (1) the same intensity and (2) the same direction. These two conditions can be simplified by requiring the glove to apply a torque equal to that at the interphalangeal joint. The chosen solution uses a mechanical structure with three passive joints which, together with the interphalangeal joint, make up a flat four-bar closed-link mechanism. This solution uses cables placed in the interior of the four-bar mechanism, following a trajectory identical to that of the extensor tendons which, by nature, oppose the movement of the flexor tendons in order to harmonize the movement of the fingers. Among the advantages of this structure one can cite:
• Allows 4 DOF for each finger
• Adapts to different sizes of fingers
• Located on the back of the hand
• Applies different forces on each phalanx (including the possibility of applying a lateral force on the fingertip by motorizing the abduction/adduction joint)
• Measures finger angular flexion (the measurements of the joint angles are independent and can have good resolution, given the long paths traveled by the cables when the finger closes)
Fig 4.2 CyberGlove
HAPTIC RENDERING
PRINCIPLE OF HAPTIC INTERFACE
The haptic interaction occurs at an interaction tool of a haptic interface that mechanically couples two controlled dynamical systems: the haptic interface with a computer, and the human user with a central nervous system. The two systems are exactly symmetrical in structure and information: each senses its environment, makes decisions about control actions, and provides mechanical energy to the interaction tool through motion.
CHARACTERISTICS DESIRABLE FOR HAPTIC DEVICES
1) Low back-drive inertia and friction;
2) Minimal constraints on motion imposed by the device kinematics so free motion feels free;
3) Symmetric inertia, friction, stiffness, and resonant frequency properties (thereby regularizing the device so users don’t have to unconsciously compensate for parasitic forces);
4) Balanced range, resolution, and bandwidth of position sensing and force reflection; and
5) Proper ergonomics that let the human operator focus when wearing or manipulating the haptic interface as pain, or even discomfort, can distract the user, reducing overall performance.
CREATION OF AN AVATAR
An avatar is the virtual representation of the haptic interface through which the user physically interacts with the virtual environment. Clearly, the choice of avatar depends on what's being simulated and on the haptic device's capabilities. The operator controls the avatar's position inside the virtual environment. Contact between the interface avatar and the virtual environment sets off action and reaction forces. The avatar's geometry and the type of contact it supports regulate these forces. Within a given application the user might choose among different avatars. For example, a surgical tool can be treated as a volumetric object exchanging forces and positions with the user in a 6D space, or as a pure point representing the tool's tip, exchanging forces and positions in a 3D space.
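The two avatar choices from the surgical-tool example can be sketched as simple data types; the names below are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class PointAvatar:
        """Pure point representing a tool's tip: forces and positions exchanged in 3D."""
        position: tuple      # (x, y, z)

    @dataclass
    class VolumetricAvatar:
        """Volumetric tool: forces and positions exchanged in a 6D space."""
        position: tuple      # (x, y, z)
        orientation: tuple   # (roll, pitch, yaw)

    # Within a given application the user might choose either avatar for the same tool:
    scalpel_tip  = PointAvatar(position=(0.0, 0.0, 0.0))
    scalpel_body = VolumetricAvatar(position=(0.0, 0.0, 0.0), orientation=(0.0, 0.0, 0.0))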
SYSTEM ARCHITECTURE FOR HAPTIC RENDERING
[Fig 5.1: Architecture for haptic rendering. Collision detection and collision response exchange position/orientation, collision information, and force/torque with an object database holding geometry and material properties.]
Haptic-rendering algorithms compute the correct interaction forces between the haptic interface representation inside the virtual environment and the virtual objects populating the environment. Moreover, haptic-rendering algorithms ensure that the haptic device correctly renders such forces on the human operator. Several components compose a typical haptic-rendering algorithm. We identify three main blocks, illustrated in the figure shown above.
Collision-detection algorithms detect collisions between objects and avatars in the virtual environment and yield information about where, when, and ideally to what extent collisions (penetrations, indentations, contact area, and so on) have occurred. Force-response algorithms compute the interaction force between avatars and virtual objects when a collision is detected. This force approximates as closely as possible the contact forces that would normally arise during contact between real objects. Force-response algorithms typically operate on the avatars' positions, the positions of all objects in the virtual environment, and the collision state between avatars and virtual objects. Their return values are normally force and torque vectors that are applied at the device-body interface. Hardware limitations prevent haptic devices from applying the exact force computed by the force-response algorithms to the user. Control algorithms command the haptic device in such a way as to minimize the error between the ideal and applicable forces. The discrete-time nature of the haptic-rendering algorithms often makes this difficult, as we explain later. Desired force and torque vectors computed by the force-response algorithms feed the control algorithms. The algorithms' return values are the actual force and torque vectors that will be commanded to the haptic device.
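The three blocks can be made concrete with a minimal sketch for a point avatar touching a single sphere. The stiffness, sphere, and force limit below are illustrative assumptions, and real collision detection and control are far more involved.

    import math

    CENTER, RADIUS = (0.0, 0.0, 0.0), 0.05  # one sphere in the virtual environment
    STIFFNESS = 600.0                       # N/m, surface stiffness
    MAX_FORCE = 8.0                         # N, what the device can actually apply

    def collision_detect(p):
        """Whether and how deeply the avatar penetrates the sphere."""
        d = math.dist(p, CENTER)
        return ((RADIUS - d) if d < RADIUS else 0.0), d

    def force_response(p):
        """Spring force pushing the avatar out along the surface normal."""
        depth, d = collision_detect(p)
        if depth <= 0 or d == 0:
            return (0.0, 0.0, 0.0)
        normal = tuple((pi - ci) / d for pi, ci in zip(p, CENTER))  # outward normal
        return tuple(STIFFNESS * depth * n for n in normal)

    def control(f):
        """Clamp the ideal force to what the hardware can render."""
        mag = math.sqrt(sum(c * c for c in f))
        return f if mag <= MAX_FORCE else tuple(c * MAX_FORCE / mag for c in f)

    # Avatar 2 cm inside the sphere: ideal force is 12 N, clamped to 8 N.
    print(control(force_response((0.0, 0.0, 0.03))))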
APPLICATIONS
MEDICAL APPLICATIONS
The sense of touch is crucial for medical training. Many diagnostic, surgical, and interventional procedures require that physicians train and utilize their sense of touch. Effective medical training utilizing computers, therefore, has not been feasible before now.
DENTAL TRAINING
Dental students currently use artificial teeth
and jaws, along with real dental instruments, to practice cavity preparation
and other procedures. These plastic models, however, lack the level of detail
and material properties needed to accurately simulate real teeth and
procedures. For example, real life complications, such as bleeding, and many
common procedures, such as tooth extraction, cannot be simulated with these
plastic training systems. Current training procedures, therefore, require that
dental students gain a significant portion of their required experience while
practicing on live patients.
This is obviously less than optimal. Furthermore, utilizing classical, visual-only computer simulations is not acceptable, since a significant part of the student's learning is tactile in nature; a "hands-on" curriculum is literally required. The VRDTS dental simulator application, however, provides the tactile involvement needed for dental training. Moreover, VRDTS offers training benefits that are not possible with either plastic models or live patients. The student, for example, can repeat procedures many times, precisely measure and quantify the results, and work at different size scales.
MEDICAL DIAGNOSIS, PLANNING, AND VISUALIZATION
Novint's voxelNotepad (VNP) application allows 3D medical data to be felt as well as viewed in real time. Novint integrated the PHANTOM haptic interface with a Windows-based PC system and advanced volumetric software to create the first 3D touch-enabled environment for medical data analysis and diagnosis. There has been a growing disconnect between the computing needs of radiologists and surgeons and the capabilities of their computer tools. MRI, CT, and 3D ultrasound scan data is inherently 3D and growing more detailed and complex all the time. Yet the human-computer interface typically used for interpreting this data, comprising the mouse, keyboard, and video display terminal, is 2D and (many would argue) less than intuitive. Novint's VNP software makes it possible to interpret MRI, CT, and 3D ultrasound data completely in 3D, directly and intuitively. Using VNP, the user can set the visual and touch properties of the medical data interactively, enabling the haptic and visual highlighting of areas of interest (such as a tumor or arterial calcification). No longer must a radiologist or surgeon "guess" when trying to determine the depth or extent of 3D structures on 2D media such as film or traditional computer displays.
NEEDLE INSERTION
There are a wide range of
needle insertion procedures for which it is not currently possible to
adequately train medical students and residents. These include anesthetic
blocks (epidural, celiac plexus, etc), obstetric (amniocentesis, cordocentesis,
etc), orthopedic (injection of joint lubricants), this is but a small portion
of a very long list. In addition, physicians, nurses and other medical
personnel all require training in various needle procedures. Training in all of
this procedures is fundamentally similar it is only the anatomical region of
interest and the goals of the procedure that vary. Because of these factors, a
family of “needle insertion” trainers has been developed.
MUSEUM DISPLAY
Although it is not yet
commonplace, a few museums are exploring methods for 3D digitization of
priceless artifacts and objects from their sculpture and decorative arts
collections, making the images available via CD-ROM or in-house kiosks. For
example, the Canadian
Museum of Civilization
collaborated with Ontario-based Hymarc to use the latter's ColorScan 3D laser
camera to create three-dimensional models of objects from the museum's
collection (Canarie, Inc., 1998; Shulman, 1998). A similar partnership was
formed between the Smithsonian Institution and Synthonic Technologies, a Los
Angeles-area company.
At Florida State University, the Department of Classics has worked with a team to digitize Etruscan artifacts using the RealScan 3D imaging system from Real 3D (Orlando, Florida), and art historians from Temple University have collaborated with researchers from the Watson Research Laboratory's visual and geometric computing group to create a model of Michelangelo's Pietà, using the Virtuoso shape camera from Visual Interface. Haptics raises the prospect of offering museum visitors not only the opportunity to examine and manipulate digitized 3D art objects visually, but also to interact with them remotely, in real time.
MILITARY APPLICATIONS
Haptics has also been used in aerospace and military training and simulations. There are a number of circumstances in a military context in which haptics can provide a useful substitute information source; that is, there are circumstances in which the modality of touch could convey information that for one reason or another is not available, not reliably communicated, or not best apprehended through the modalities of sound and vision. In some cases, combatants may have their view blocked or may not be able to divert attention from a display to attend to other information sources. Battlefield conditions, such as the presence of artillery fire or smoke, might make it difficult to hear or see. Conditions might necessitate that communications be inaudible (Transdimension, 2000). For certain applications, for example where terrain or texture information needs to be conveyed, haptics may be the most efficient communication channel.
In circumstances like those described above, haptics is an alternative modality to sound and vision that can be exploited to provide low-bandwidth situation information, commands, and threat warnings (Transdimension, 2000). In other circumstances haptics could function as a supplemental information source to sound or vision. The resistance and friction provided by stylus-based force feedback adds an intuitive feel to such everyday tasks as dragging, sliding levers, and depressing buttons. There are more complex operations, such as concatenating or editing, for which a grasping metaphor may be appropriate.
Here the whole-hand force feedback provided by glove-based devices could convey the sensation of grasping. The Naval Aerospace Medical Research Laboratory has developed a "Tactile Situation Awareness System" for providing accurate orientation information in land, sea, and aerospace environments. One application of the system is to alleviate problems related to the spatial disorientation that occurs when a pilot incorrectly perceives the attitude, altitude, or motion of his aircraft; some of this error may be attributable to momentary distraction, reduced visibility, or an increased workload.
INTERACTION TECHNIQUES
An obvious application of
haptics is to the user interface, in particular its repertoire of interaction techniques, loosely
considered that set of procedures by which basic tasks, such as opening and
closing windows, scrolling, and selecting from a menu, are performed
(Kirkpatrick & Douglas, 1999). Indeed, interaction techniques have been a
popular application area for 2D haptic mice like the Wingman and I-Feel, which
work with the Windows interface to add force feedback to windows, scroll bars,
and the like. For some of these force-feedback mice, shapes, textures, and other
properties of objects (spring, damping) can be "rendered" with
JavaScript and the objects delivered for exploration with the haptic mice via
standard Web pages. Haptics offers a natural user interface based on the human
gesture system.
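A typical interaction-technique effect of this kind is a gentle spring that pulls the cursor toward a target, giving icons a "snap" feel. The sketch below is a generic illustration with made-up parameters, not any vendor's actual effect API.

    SNAP_RADIUS = 20.0  # pixels within which the attraction acts
    SNAP_GAIN = 0.15    # force per pixel of offset (arbitrary device units)

    def snap_force(cursor, icon_center):
        """2D force pulling the cursor toward the icon while inside the radius."""
        dx = icon_center[0] - cursor[0]
        dy = icon_center[1] - cursor[1]
        if (dx * dx + dy * dy) ** 0.5 > SNAP_RADIUS:
            return (0.0, 0.0)
        return (SNAP_GAIN * dx, SNAP_GAIN * dy)

    print(snap_force((105.0, 98.0), (100.0, 100.0)))  # gentle pull toward the icon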
ASSISTIVE TECHNOLOGY FOR THE BLIND AND VISUALLY IMPAIRED
With a haptic computer interface a blind person can play haptic computer games, feel maps that are displayed on the Internet, and learn mathematics by tracing touchable mathematical curves. Most haptic systems still rely heavily on a combined visual/haptic interface. This dual modality is very forgiving in terms of the quality of the haptic rendering, because ordinarily the user is able to see the object being touched and naturally persuades herself that the force feedback coming from the haptic device closely matches the visual input. However, in most current haptic interfaces the quality of haptic rendering is actually poor and, if the user closes her eyes, she will only be able to distinguish between very simple shapes (such as balls, cubes, etc.).
To date there has been a modest amount of work on the use of machine haptics for the blind and visually impaired. Among the two-dimensional haptic devices potentially useful in this context, the most recent are the Moose, the Wingman, the iFeel, and the Sidewinder. The Moose, a 2D haptic interface developed at Stanford (O'Modhrain & Gillespie, 1998), reinterprets a Windows screen with force feedback such that icons, scroll bars, and other screen elements like the edges of windows are rendered haptically, providing an alternative to the conventional graphical user interface (GUI). For example, drag-and-drop operations are realized by increasing or decreasing the apparent mass of the Moose's manipulandum.
Among the three-dimensional haptic devices, Immersion's Impulse Engine 3000 has been shown to be an effective display system for blind users. Colwell et al. (1998) had blind and sighted subjects make magnitude estimations of the roughness of virtual textures using the Impulse Engine and found that the blind subjects were more discriminating with respect to the roughness of texture and had different mental maps of the location of the haptic probe relative to the virtual object than sighted users. The researchers found, however, that for complex virtual objects, such as models of sofas and chairs, haptic information was simply not sufficient to produce recognition and had to be supplemented with information from other sources for all users.
ENTERTAINMENT
Haptics is used to enhance the gaming experience. An example is TouchWare Gaming technology, which uses sound card data to produce sensations for force feedback devices, whether gamepad, joystick, wheel, or mouse. The software also allows you to program force-feedback sensations for your game controller's button presses. With a library of button effects optimized for many force feedback controllers, you can program your favorite game for the ultimate "touch".
CONSUMER ELECTRONICS
Haptic devices have now been launched in the consumer electronics segment. TouchWare Desktop brings feeling to Microsoft Windows and the Internet. Immersion's TouchWare Desktop makes it easier to locate and select icons and move through menus. As you move from one menu item to the next, you feel small pulses, as if you were moving across the rungs of a ladder.
LIMITATIONS OF HAPTIC SYSTEMS
Limitations of haptic device systems can make it impossible to apply the exact force value computed by force-rendering algorithms. The various issues that limit a haptic device's capability to render a desired force or, more often, a desired impedance are given below.
1) Haptic interfaces can only exert forces with limited magnitude, and not equally well in all directions; thus rendering algorithms must ensure that no output components saturate, as this would lead to erroneous or discontinuous application of forces to the user. In addition, haptic devices aren't ideal force transducers.
2) An ideal haptic device would render zero impedance when simulating movement in free space, and any finite impedance when simulating contact with an object featuring such impedance characteristics. The friction, inertia, and backlash present in most haptic devices prevent them from meeting this ideal.
3) A third issue is that haptic-rendering algorithms operate in discrete time whereas users operate in continuous time, as the numerical sketch at the end of this list illustrates. While moving into and out of a virtual object, the sampled avatar position will always lag behind the avatar's actual continuous-time position. Thus, when pressing on a virtual object, a user needs to perform less work than in reality; when the user releases, however, the virtual object returns more work than its real-world counterpart would have returned. In other terms, touching a virtual object extracts energy from it. This extra energy can cause an unstable response from haptic devices.
4) Finally, haptic device position sensors have finite resolution. Consequently, attempting to determine where and when contact occurs always results in a quantization error. Although users might not easily perceive this error, it can create stability problems.

All of these issues, well known to practitioners in the field, can limit a haptic application's realism. The first two issues usually depend more on the device mechanics; the latter two depend on the digital nature of VR applications.
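The energy leak in limitation 3 can be shown with a small numerical experiment. The sketch below, using illustrative stiffness and hand-speed values, presses a point avatar 5 mm into a sampled spring wall and withdraws it along the same path; because each force is computed from the previous sample, the wall returns more work than it absorbed, so the interaction creates energy.

    K = 1000.0  # N/m, virtual wall stiffness (illustrative)
    DT = 0.001  # s, servo period
    V = 0.1     # m/s, constant hand speed, in then out

    def work_by_wall(positions):
        """Work the sampled wall does on the hand over a trajectory."""
        w = 0.0
        for i in range(len(positions) - 1):
            force = K * max(0.0, -positions[i])  # force from the SAMPLED position
            w += force * (positions[i + 1] - positions[i])
        return w

    going_in = [-V * DT * i for i in range(51)]  # 0 to 5 mm inside the wall
    coming_out = list(reversed(going_in))
    net = work_by_wall(going_in) + work_by_wall(coming_out)
    print(f"net work done on the hand: {net:+.4f} J")  # positive: energy was created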
FUTURE VISION
As haptics moves beyond the buzzes and thumps of today’s video games, technology will enable increasingly believable and complex physical interaction with virtual or remote objects. Already haptically enabled commercial products let designers sculpt digital clay figures to rapidly produce new product geometry, museum goers feel previously inaccessible artifacts, and doctors train for simple procedures without endangering patients.
Past
technological advances that permitted recording, encoding, storage,
transmission, editing, and ultimately synthesis of images and sound profoundly
affected society. A wide range of human activities, including communication,
education, art, entertainment, commerce, and science, were forever changed when
we learned to capture, manipulate, and create sensory stimuli nearly
indistinguishable from reality. It’s not unreasonable to expect that future
advancements in haptics will have equally deep effects. Though the field is
still in its infancy, hints of vast, unexplored intellectual and commercial
territory add excitement and energy to a growing number of conferences,
courses, product releases, and invention efforts.
For the field to move beyond today's state of the art, researchers must surmount a number of commercial and technological barriers. Device- and software-tool-oriented corporate efforts have provided the tools we need to step out of the laboratory, yet we need new business models. For example, can we create haptic content and authoring tools that will make the technology broadly attractive? Can the interface devices be made practical and inexpensive enough to make them widely accessible? Once we move beyond single-point, force-only interactions with rigid objects, we should explore several technical and scientific avenues. Multipoint, multi-hand, and multi-person interaction scenarios all offer enticingly rich interactivity. Adding sub-modality stimulation such as tactile (pressure distribution) display and vibration could add subtle and important richness to the experience. Modeling compliant objects, such as for surgical simulation and training, presents many challenging problems to enable realistic deformations, arbitrary collisions, and topological changes caused by cutting and joining actions.
Improved accuracy and richness in object modeling and haptic rendering will require advances in our understanding of how to represent and render psychophysically and cognitively germane attributes of objects, as well as algorithms and perhaps specialty hardware (such as haptic or physics engines) to perform real-time computations. Development of multimodal workstations that provide haptic, visual, and auditory engagement will offer opportunities for more integrated interactions. We're only beginning to understand the psychophysical and cognitive details needed to enable successful multimodal interactions. For example, how do we encode and render an object so there is seamless consistency and congruence across sensory modalities; that is, does it look like it feels? Are the object's density, compliance, motion, and appearance familiar and unconsciously consistent with context? Are sensory events predictable enough that we consider objects to be persistent, and can we make correct inferences about their properties? Hopefully we will find good solutions to all these questions in the near future.
CONCLUSION
Science fiction is by any measure the perfect way to see the future of computer developments and devices for human interaction. The implementation of haptic and tactile devices to aid people with disabilities will continue to advance, and users will benefit as we increasingly design products with Universal Design central to the development process. Much of haptic technology is currently limited for consumers; however, it is believed that future generations of mobile phones and games console accessories will implement more haptic feedback into these product ranges, and perhaps desktop and laptop computers as well, with the increasing application of touch user interfaces for user input. It could be concluded that, however impressive haptic technology is for consumers, it is still embryonic when compared to full-fledged VR simulations.
REFERENCES
[1] J.C. Roberts and S. Panëels, "Where Are We with Haptic Visualization?", Proc. World Haptics Conf. (WHC '07), pp. 316-323, 2007.
[2] J.K. Salisbury and M.A. Srinivasan, Sections on Haptics, in Virtual Environment Technology for Training, BBN Report No. 7661, prepared by The Virtual Environment and Teleoperator Research Consortium (VETREC), MIT, 1992.
[4] www.wikipedia.org/haptics