
1 Input Devices, User Interface & VR Environments. Sri Kalyan Atluri (U89797193)

2 Input Devices Graphics programs require several kinds of input data. To make graphics packages independent of the particular hardware devices used, input functions can be designed according to the kind of data each function handles. This leads to a classification of logical input devices. Input devices are summarized into six logical device classes:
Locator: a device that specifies a coordinate position (x, y).
Stroke: a device that specifies a series of coordinate positions.
String: a device that specifies text input.
Valuator: a device that specifies scalar values.
Choice: a device that specifies menu options.
Pick: a device that specifies picture components.
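The six logical classes above can be sketched as a small dispatch model. This is a minimal illustration, not part of any slide; the names (`LogicalDevice`, `InputEvent`, `describe`) are invented for the example.

```python
from dataclasses import dataclass
from enum import Enum, auto

class LogicalDevice(Enum):
    LOCATOR = auto()   # a single (x, y) coordinate position
    STROKE = auto()    # a series of coordinate positions
    STRING = auto()    # text input
    VALUATOR = auto()  # a scalar value
    CHOICE = auto()    # a menu option
    PICK = auto()      # a picture component

@dataclass
class InputEvent:
    device: LogicalDevice
    measure: object  # e.g. (x, y) for LOCATOR, str for STRING, float for VALUATOR

# A graphics package dispatches on the logical class, not the physical hardware:
def describe(ev: InputEvent) -> str:
    if ev.device is LogicalDevice.LOCATOR:
        x, y = ev.measure
        return f"locator at ({x}, {y})"
    return f"{ev.device.name.lower()} input: {ev.measure!r}"

print(describe(InputEvent(LogicalDevice.LOCATOR, (120, 45))))  # → locator at (120, 45)
```

Because the program sees only the logical class and its measure, any physical device (mouse, tablet, joystick) can stand in for any class.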

3 Locator Devices: The most common way to select a coordinate point is by positioning the screen cursor, typically with a mouse, joystick, trackball, spaceball, thumbwheels, dials, digitizer stylus, or some other cursor-positioning device. When the screen cursor is at the desired location, a button is pressed to store the coordinates of that screen point.

4 Stroke Devices: These logical devices are used to input a sequence of coordinate positions. Stroke-device input is equivalent to multiple calls to a locator device, and the set of input points is often used to display line sections. The graphics tablet is the most common stroke device. Locator devices such as the mouse and joystick work as stroke devices when their input is sampled continuously. Stroke devices are mainly used in paintbrush systems.

5 Choice Devices: Graphics packages use menus to select programming options, parameter values, and object shapes to be used in constructing a picture. A choice device is one that enters a selection from a list of alternatives. A function keyboard or a button box can be designed as a stand-alone choice-device unit; more commonly, a set of buttons or a cursor-pointing device serves as the choice device. When a coordinate (x, y) is picked, it is compared with the vertical and horizontal boundaries of each menu option at the coordinate values x_min, x_max, y_min, and y_max; the option is selected if (x, y) satisfies the inequalities:
x_min <= x <= x_max and y_min <= y <= y_max
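The boundary test above is a point-in-rectangle check. A minimal sketch of menu selection using it (the menu layout and labels are invented for the example):

```python
def choice_hit(x, y, x_min, x_max, y_min, y_max):
    """Return True if the picked point (x, y) falls inside an option's
    rectangular boundary: x_min <= x <= x_max and y_min <= y <= y_max."""
    return x_min <= x <= x_max and y_min <= y <= y_max

# Hypothetical menu laid out as (label, x_min, x_max, y_min, y_max):
menu = [
    ("Open", 0, 100, 0, 20),
    ("Save", 0, 100, 20, 40),
    ("Quit", 0, 100, 40, 60),
]

def select_option(x, y):
    """Return the label of the option whose boundary contains (x, y)."""
    for label, x0, x1, y0, y1 in menu:
        if choice_hit(x, y, x0, x1, y0, y1):
            return label
    return None  # picked outside the menu

print(select_option(50, 30))  # → Save
```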

6 Pick Devices: Pick devices are used to select parts of a scene that are to be transformed or edited in some way. Cursor-positioning devices also serve as pick devices: with a mouse or joystick, we position the cursor over a primitive in a displayed structure and press the selection button. The cursor position is recorded, and several levels of search may be needed to identify the picked primitive. For cursor coordinates (x, y) and line endpoints (x1, y1) and (x2, y2), the differences dx = x2 - x1 and dy = y2 - y1 are calculated for the identification scheme.
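One common identification scheme built on dx and dy is testing whether the cursor lies within a small tolerance of the line segment. A minimal sketch (the tolerance value and function name are illustrative, not from the slides):

```python
import math

def near_line(x, y, x1, y1, x2, y2, tol=3.0):
    """Test whether the cursor (x, y) lies within `tol` pixels of the
    line segment from (x1, y1) to (x2, y2), using dx = x2 - x1 and
    dy = y2 - y1 as above."""
    dx, dy = x2 - x1, y2 - y1
    length_sq = dx * dx + dy * dy
    if length_sq == 0:                       # degenerate segment: a point
        return math.hypot(x - x1, y - y1) <= tol
    # Project the cursor onto the segment, clamping to the endpoints.
    t = max(0.0, min(1.0, ((x - x1) * dx + (y - y1) * dy) / length_sq))
    px, py = x1 + t * dx, y1 + t * dy    # nearest point on the segment
    return math.hypot(x - px, y - py) <= tol

print(near_line(5, 1, 0, 0, 10, 0))   # cursor 1 px from the segment → True
print(near_line(5, 8, 0, 0, 10, 0))   # 8 px away → False
```

Picking then amounts to running this test against each candidate line and selecting the first (or nearest) one that passes.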

7 Pick Correlation When the user presses the locator button, the application must determine exactly which screen button, icon, or object was selected, so that it can respond appropriately. This determination is called pick correlation, and it is a fundamental part of interactive graphics. Method of pick correlation: an application using SRGP performs pick correlation by determining the point where the cursor is located and the object the user is selecting. The correlation is done with simple, frequently used Boolean functions; the GEOM package supplied with SRGP includes the functions and utilities needed for it.

8 Picking Operation Picking and creating an object, based on picking a choice from a pull-down menu: when the user picks a header from a pull-down menu and selects an entry on it, the application creates an object and assigns it a unique positive integer identifier (ID). This ID is returned to the pick correlation function for further processing of the object.

9 Pick Correlation Function:
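The slide's figure is not in the transcript, so here is a minimal, hypothetical sketch of what such a function might look like. This is not SRGP's actual API; the data layout and names are assumptions for illustration.

```python
def pick_correlate(cursor, objects, tol=3.0):
    """Return the ID of the topmost object whose screen rectangle
    contains the cursor (within `tol` pixels), or None.  `objects`
    is a list of (id, x_min, y_min, x_max, y_max) drawn back to
    front, so we search it in reverse to honour stacking order."""
    cx, cy = cursor
    for obj_id, x0, y0, x1, y1 in reversed(objects):
        if x0 - tol <= cx <= x1 + tol and y0 - tol <= cy <= y1 + tol:
            return obj_id
    return None

scene = [(1, 0, 0, 50, 50), (2, 40, 40, 90, 90)]  # object 2 drawn on top
print(pick_correlate((45, 45), scene))  # → 2
```

The returned ID is what the application would then pass on for further processing, as described on the previous slide.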

10 Input Modes Input devices contain a trigger that can be used to send a signal to the operating system, for example the button on a mouse. When triggered, an input device sends information such as the position of the cursor, which is used for processing. Input modes are classified by the way an event is triggered:
Request Mode
Sample Mode
Event Mode

11 Request Mode:
Input is provided to the program only when the user triggers the device.
The application requests input from a device.
The graphics package returns control after the user "triggers".
Only one device is triggered at a time.

12 Sample Mode:
The application polls all devices.
Input at other devices can be missed while a particular one is being processed.
The measure is returned immediately after the function is called in the user program (the device is sampled); no trigger is needed.
Useful in applications where the program guides the user.

13 Event Mode:
Input is accepted asynchronously from different devices at once.
Each trigger generates an event whose measure is put in an event queue, which can be examined by the user program.
The event queue may contain conflicting events.
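Event mode can be sketched with a simple queue: triggers append events, and the program drains them later at its own pace. A minimal illustration (device names and measures are invented):

```python
import queue

# Each trigger puts an event (device name, measure) on the queue;
# the user program examines the queue later, at its own pace.
event_queue = queue.Queue()

def trigger(device, measure):
    """Called (possibly asynchronously) whenever any device fires."""
    event_queue.put((device, measure))

# Devices fire in arbitrary, interleaved order:
trigger("mouse", (120, 45))
trigger("keyboard", "A")
trigger("mouse", (121, 46))

# The user program drains the queue when it is ready:
handled = []
while not event_queue.empty():
    handled.append(event_queue.get())
print(handled)
```

This is the key contrast with request and sample modes: no device is singled out, and no input is lost while another device is being processed.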

14 Interactive Picture Generation Several techniques are incorporated into graphics packages to aid the interactive construction of pictures. Various input options can be provided so that coordinate information is entered easily, and several techniques have been developed to make constructing graphics easier for the user:
1. Basic Positioning Methods
2. Constraints
3. Grids
4. Gravity Fields
5. Rubber-Band Methods
6. Sketching
7. Dragging

15 Position with Feedback: The cursor is moved to the desired location. A button is pressed to fix the object at this location.

16 Rubber-Band Methods: Used to construct and position straight lines.

17 Used to construct Circular arcs:

18 Used to Scale Objects:

19 Used to distort objects by allowing only the line segments attached to a single vertex to change:

20 Sketching: Uses rubber-band methods to create objects consisting of connected line segments.
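The press/drag/release cycle behind rubber-band sketching can be modelled without any GUI toolkit. This is a minimal sketch of the interaction state; the class and method names are invented for illustration:

```python
class RubberBandSketch:
    """Model of rubber-band sketching: pressing anchors a line, dragging
    stretches its free endpoint, and releasing fixes the segment, with
    the next segment starting where the last one ended."""

    def __init__(self):
        self.segments = []   # fixed line segments: ((x1, y1), (x2, y2))
        self.anchor = None   # start point of the segment being dragged

    def press(self, x, y):
        self.anchor = (x, y)

    def drag(self, x, y):
        # The segment shown on screen stretches from the anchor to the
        # cursor; nothing is committed until the button is released.
        return (self.anchor, (x, y))

    def release(self, x, y):
        self.segments.append((self.anchor, (x, y)))
        self.anchor = (x, y)   # connected segments: the next starts here

sketch = RubberBandSketch()
sketch.press(0, 0)
sketch.release(10, 0)
sketch.release(10, 10)
print(sketch.segments)  # → [((0, 0), (10, 0)), ((10, 0), (10, 10))]
```

In a real program, `press`, `drag`, and `release` would be bound to mouse-button events, with `drag` redrawing the stretching line each time the cursor moves.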

21 Virtual Reality Environments Virtual reality is an interactive, immersive, 3D computer-generated simulation designed to make users believe, to the greatest extent possible, that they are actually experiencing a real environment. Key properties of VR:
Degree of immersion
Degree of presence
Suspension of disbelief

22 Components of Virtual Reality:
A database of 3D objects.
Textures, images, video, and sounds.
Input hardware: position trackers, gesture recognition (Kinect), haptics (touch), stereo viewing, joysticks, treadmills, gloves, eye tracking.
Interaction with objects and avatars; navigation; animation; speech.
Can involve one person or many.
Displayed on a computer monitor or in a large room.

23 VR Equipment:
Helmet
Gloves
Bodysuit
Examples of VR:
Games
Holiday tours
Combat situations
Historical recreations

24 Helmet and Glove
The helmet, sometimes called a head-mounted device, provides 3D visuals and sound.
The glove provides sensations of touch and allows control of the model.

25 Finger sensor: A device that goes on the end of the finger and senses simulated movement

26 Pilot Cockpit Training
Advantages:
Eliminates danger.
Less expensive than using a real plane.
Gives the pilot the experience of a real environment.
Disadvantages:
The training can only be as good as the program.

27 Gaming:
Advantages:
Can add to the gaming experience, for example animated people with various attributes.
Visual, sound, and sensation effects can be added.
Disadvantages:
The player can start believing in the unreal.
The equipment needed to play the games is sophisticated.

28 Methodologies in VR:
1) Simulation-based VR
2) Avatar image-based VR
3) Projector-based VR
4) Desktop-based VR
5) True immersive virtual reality

29 User Interfaces The user interface is one of the most important parts of any program because it is the only connection between the system and the user. There are basically two types of interface:
Character-Based Interfaces
2D Graphical User Interfaces
Character-Based Interfaces: A CBI is typically represented as an array of boxes, each of which can hold only one character. The basic input device on most small computer systems is a keyboard; as characters are typed, they are stored in memory and copied to a basic output device, the display screen. Many character-based interfaces include some features of GUIs, such as menus.

30 2D Graphical User Interfaces: In a GUI, the screen is divided into a grid of picture elements (pixels). A GUI also uses a keyboard as an input device and a display as an output device. A GUI often includes well-defined standard formats for representing text and graphics, making it possible for different programs using a common GUI to share data. Interaction with 3D objects is handled analogously by 3D graphical user interfaces; such interfaces, with all their components in a 3D environment, have not yet had any major impact outside the laboratory.

31 3D GUIs Are Used to Accomplish the Virtual Environment: The eye receives information in two dimensions, but the brain translates these cues into three-dimensional perceptions. It does this by using both monocular cues (which require only one eye) and binocular cues (which require both eyes). Monocular cues: used to create the illusion of depth on a normal two-dimensional display. These depth effects are experienced just as powerfully with one eye closed as with both eyes open. Even the effective use of patterns of light and shadow alone can provide important depth information.

32 Summary of the Monocular Depth Cues:

33 Binocular Depth Cues: The appearance of depth that results from the use of both eyes is called stereopsis; it arises when the brain combines the two slightly different images it receives from each eye into a single image with depth. The effect of stereopsis can be achieved artificially with devices such as head-mounted displays or shutter glasses. Unlike monocular depth cues, stereopsis cannot be interpreted by about 10% of people; they cannot detect binocular depth cues and are therefore 'stereoblind'.

34 Interface Components: The third dimension can be handled using mostly plain 2D interface components that are mapped to rotation and translation of the 3D objects. Mouse movement is mapped to rotation/translation, and a selection mechanism is used to choose which degrees of freedom the device is controlling. Selective Dynamic Manipulation is a paradigm for interacting with objects in visualizations.

35 References
https://en.wikipedia.org/wiki/Input_device
http://cs.boisestate.edu/~tcole/cs341/fall04/vg/1intro.pdf
http://www.aw.com/info/angel/computergraphics/ch3.pdf
http://bengal.missouri.edu/~duanye/cs4610-s2014/lecture-notes/3-07-Input%20and%20Interaction.ppt
https://www.umass.edu/researchnext/inpictures/visual-effect-developing-interactive-next-generation-computer-graphics
http://graphics.cs.cmu.edu/nsp/course/15-462/Spring04/slides/23_vr.pdf
https://en.wikipedia.org/wiki/Methods_of_virtual_reality
https://graphics.stanford.edu/~dk/papers/ve-travel-methodology.pdf
http://courses.cs.vt.edu/~cs4204/lectures/virtual_environments.pdf

36 Thank You.

