Autonomous vehicles (AVs) hold immense promise for revolutionizing transportation, and their potential benefits extend to individuals with impairments, particularly those with vision and hearing impairments. However, accommodating these individuals in AVs requires the development of advanced user interfaces. This paper describes an exploratory study of a multimodal user interface for autonomous vehicles, specifically developed for passengers with sensory (vision and/or hearing) impairments. In a driving simulator, 32 volunteers with simulated sensory impairments were exposed to multiple drives in an autonomous vehicle while freely interacting with standard and inclusive variants of the infotainment and navigation system interface. The two user interfaces differed in graphical layout and voice messages, which adopted inclusive design principles in the inclusive variant. Questionnaires and structured interviews were conducted to collect participants' impressions. The data analysis reports positive user experiences but also identifies technical challenges. Verified guidelines are provided for further development of inclusive user interface solutions.
Radiation-induced acoustic computed tomography (RACT) is an evolving biomedical imaging modality that aims to reconstruct the radiation energy deposition in tissues. Traditional backprojection (BP) reconstructions suffer from noise and limited-view artifacts. Model-based algorithms have been demonstrated to overcome the drawbacks of BP, but they are more complex to develop and computationally demanding. Furthermore, while a plethora of novel algorithms has been developed over the past decade, most are either not accessible, not readily available, or hard to implement for researchers who are not well versed in programming. We developed a user-friendly MATLAB-based graphical user interface (GUI; RACT2D) that facilitates backprojection and model-based image reconstructions for two-dimensional RACT problems. We included numerical and experimental X-ray-induced acoustic datasets to demonstrate the capabilities of the GUI. The developed algorithms support parallel computing across the computer's processor cores, further accelerating reconstruction. We also share the MATLAB-based codes for evaluating RACT reconstructions, which users with MATLAB programming expertise can modify to suit their needs. The shared GUI and codes can be of interest to researchers across the globe and assist them in efficient evaluation of improved RACT reconstructions.
Funding: supported by the National Institutes of Health (R37CA240806) and the American Cancer Society (133697-RSG-19-110-01-CCE), with support from the UCI Chao Family Comprehensive Cancer Center (P30CA062203).
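The abstract contrasts backprojection with model-based reconstruction. As a point of reference, a minimal delay-and-sum backprojection for 2D acoustic data might look like the sketch below; the detector ring geometry, sound speed, sampling rate, and synthetic signals are illustrative assumptions, not RACT2D's implementation.

```python
import numpy as np

def backproject_2d(signals, det_xy, grid_x, grid_y, c=1500.0, fs=40e6):
    """Naive delay-and-sum backprojection for 2D radiation-induced acoustic CT.
    `signals` is (n_detectors, n_samples), `det_xy` holds detector positions in
    metres, `c` is the sound speed (m/s) and `fs` the sampling rate (Hz)."""
    X, Y = np.meshgrid(grid_x, grid_y)              # reconstruction grid
    image = np.zeros_like(X)
    n_det, n_samp = signals.shape
    for d in range(n_det):
        # time of flight from every pixel to this detector, converted to a sample index
        dist = np.hypot(X - det_xy[d, 0], Y - det_xy[d, 1])
        idx = np.round(dist / c * fs).astype(int)
        valid = idx < n_samp
        # sum the detector sample corresponding to each pixel's delay
        image[valid] += signals[d, idx[valid]]
    return image / n_det

if __name__ == "__main__":
    # toy usage with a synthetic 64-element ring and random signals
    rng = np.random.default_rng(0)
    theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    det_xy = np.column_stack([0.05 * np.cos(theta), 0.05 * np.sin(theta)])
    signals = rng.normal(size=(64, 2048))
    grid = np.linspace(-0.02, 0.02, 128)
    img = backproject_2d(signals, det_xy, grid, grid)
    print(img.shape)
```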
The User Interface Transition Diagram (UITD) is a formal modeling notation that simplifies the specification and design of user-system interactions. It is a valuable communication tool for technical and non-technical stakeholders during the requirements elicitation phase, as it provides a simple yet technically complete notation that is easy to understand. In this paper, we investigated the efficiency of creating UITDs using draw.io, a widely used diagramming tool, compared to a dedicated UITD editor. We conducted a study comparing the time required to complete the task of creating a medium-sized UITD with each tool, as well as participants' subjective ease of use and satisfaction with the dedicated editor. Our results show that the UITD editor is more efficient and preferred by participants, highlighting the importance of using specialized tools for creating formal models such as UITDs. The findings of this study have implications for software developers, designers, and other stakeholders involved in the specification and design of user-system interactions.
This paper describes the design and evaluation of a user interface for a remotely supervised autonomous agricultural sprayer. The interface was designed to help the remote supervisor to instruct the autonomous sprayer to commence operation, monitor the status of the sprayer and its operation in the field, and intervene when needed (i.e., to stop or shut down). Design principles and guidelines were carefully selected to help develop a human-centered automation interface. Evaluation of the interface using a combination of heuristic, cognitive walkthrough, and user testing techniques revealed several strengths of the design as well as areas that needed further improvement. Overall, this paper provides guidelines that will assist other researchers to develop an ergonomic user interface for a fully autonomous agricultural machine.
This paper presents a two-agent framework for building a natural language query interface for an IC information system, focusing on scope queries expressed in a single English sentence. The first agent, the parsing agent, syntactically processes and semantically interprets the natural language sentence to construct a fuzzy structured query language (SQL) statement. The second agent, the defuzzifying agent, defuzzifies the imprecise part of the fuzzy SQL statement into an equivalent executable precise SQL statement based on fuzzy rules. The first agent can also actively ask the user necessary questions when it needs to disambiguate vague retrieval requirements. The adaptive defuzzification approach employed in the defuzzifying agent is discussed in detail. A prototype interface has been implemented to demonstrate the effectiveness of the approach.
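To make the two-agent pipeline concrete, here is a minimal sketch of the defuzzifying step, where a fuzzy predicate is cut into a crisp SQL condition; the linguistic terms, membership ramps, and alpha threshold are hypothetical and not the paper's actual fuzzy rules.

```python
# A fuzzy predicate such as "voltage is high" is rewritten into an executable
# SQL range condition by cutting its membership function at an alpha level.

FUZZY_TERMS = {
    # (column, term): (ramp start, ramp end) of a linear membership function
    ("voltage",   "high"): (3.0, 5.0),
    ("frequency", "low"):  (200.0, 50.0),   # descending ramp
}

def defuzzify(column: str, term: str, alpha: float = 0.7) -> str:
    """Turn a fuzzy predicate into a crisp SQL condition via an alpha-cut."""
    lo, hi = FUZZY_TERMS[(column, term)]
    cut = lo + alpha * (hi - lo)             # value where membership reaches alpha
    op = ">=" if hi > lo else "<="
    return f"{column} {op} {cut:g}"

def build_query(table: str, fuzzy_predicates) -> str:
    where = " AND ".join(defuzzify(col, term) for col, term in fuzzy_predicates)
    return f"SELECT * FROM {table} WHERE {where};"

print(build_query("ic_parts", [("voltage", "high"), ("frequency", "low")]))
# SELECT * FROM ic_parts WHERE voltage >= 4.4 AND frequency <= 95;
```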
Context awareness is increasingly gaining applicability in interactive ubiquitous mobile computing systems. Each context-aware application has its own set of behaviors for reacting to context modifications. Hence, every software engineer needs to clearly understand the goal of the development and to categorize the context in the application. We incorporate context-based modifications into the appearance or the behavior of the interface, either at design time or at run time. In this paper, we present application behavior adaption to context modification via a context-based user interface in a mobile application. We are interested in a context-based user interface on a mobile device that is automatically adapted based on context information. We use an adaption tree, as named in our methodology, to represent the adaption of the mobile device user interface to various context information; the context includes the user's domain information and dynamic environment changes. Each path in the adaption tree, from the root to a leaf, represents an adaption rule. An e-commerce application is chosen to illustrate our approach; this mobile application was developed based on the adaption tree on the Android platform. The automatic adaption to context information has enhanced human-computer interaction.
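A minimal sketch of such an adaption tree is given below, with each root-to-leaf path acting as one adaption rule; the context attributes, values, and UI variants are invented for illustration and are not taken from the paper's e-commerce application.

```python
# Interior nodes test a context attribute, leaves name the UI variant to show.

ADAPTION_TREE = {
    "attribute": "network",
    "branches": {
        "offline": {"ui": "cached_catalog_view"},
        "online": {
            "attribute": "user_role",
            "branches": {
                "guest": {"ui": "browse_only_view"},
                "member": {
                    "attribute": "screen",
                    "branches": {
                        "small": {"ui": "compact_checkout_view"},
                        "large": {"ui": "full_checkout_view"},
                    },
                },
            },
        },
    },
}

def select_ui(node: dict, context: dict) -> str:
    """Walk the single root-to-leaf path selected by the current context."""
    while "ui" not in node:
        value = context[node["attribute"]]
        node = node["branches"][value]
    return node["ui"]

ctx = {"network": "online", "user_role": "member", "screen": "small"}
print(select_ui(ADAPTION_TREE, ctx))   # compact_checkout_view
```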
In the last two decades, tangible user interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. TUIs show a potential to enhance the way in which people interact with digital information. First, this paper examines the existing body of work on tangible user interfaces and discusses their application domains, especially information visualization. Then it provides a definition of intuitive use and reviews formerly separated ideas on physicality. As interaction has an impact on the overall product experience, we also discuss whether intuitive use influences the users' aesthetic judgements of such products.
As agricultural machines become more complex, it is increasingly critical that special attention be directed to the design of the user interface to ensure that the operator will have an adequate understanding of the status of the machine at all times. A user-centred design focus was employed to develop two conceptual designs (UCD1 & UCD2) for a user interface for an agricultural air seeder. The two concepts were compared against an existing user interface (baseline condition) using the metrics of situation awareness (Situation Awareness Global Assessment Technique), mental workload (Integrated Workload Scale), reaction time, and subjective feedback. There were no statistically significant differences among the three user interfaces based on the metric of situation awareness; however, UCD2 was deemed to be significantly better than either UCD1 or the baseline interface on the basis of mental workload, reaction time and subjective feedback. The research has demonstrated that a user-centred design focus will generate a better user interface for an agricultural machine.
A noncontact user interface using image processing for people with neuromuscular diseases is presented in this paper. The user interface is composed of a Web camera and a PC, and allows users to manipulate the PC using small movements of a single finger. By using image processing techniques with the Web camera, the finger is appropriately detected from the captured images. Control boxes for pointing and text input functions are also provided. To verify the performance of the interface, some tasks were experimentally performed by three able-bodied subjects and a person suffering from spinal muscular atrophy. It was clear from the experimental results that all the subjects could smoothly perform the tasks.
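A minimal sketch of this kind of webcam-based finger tracking is shown below. The paper does not state its exact detection pipeline, so the HSV skin-colour thresholding, the contour-centroid pointer update, and all threshold values are assumptions made for illustration.

```python
import cv2
import numpy as np

LOWER_SKIN = np.array([0, 40, 60], dtype=np.uint8)
UPPER_SKIN = np.array([25, 180, 255], dtype=np.uint8)

def fingertip_position(frame):
    """Return the (x, y) centroid of the largest skin-coloured blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)
    mask = cv2.medianBlur(mask, 5)                        # suppress speckle noise
    found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = found[0] if len(found) == 2 else found[1]  # OpenCV 3/4 compatibility
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    last = None
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        pos = fingertip_position(frame)
        if pos and last:
            dx, dy = pos[0] - last[0], pos[1] - last[1]
            print(f"pointer delta: ({dx}, {dy})")         # would drive the cursor here
        last = pos or last
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
```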
This paper deals with the problem of generating applications together with their graphical user interface (GUI). In particular, the source code generator based on dynamic frames was improved for more effective specification of the GUI. It is too demanding for developers to write application specifications that contain all physical coordinates and other details of buttons and other GUI elements. The developed solution to this problem is based on post-processing of the generated source code, using iterators to specify coordinates and other values of graphic elements. The paper includes two examples of generating web applications and their GUIs.
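One plausible reading of the iterator-based post-processing is sketched below: the generator emits GUI code with coordinate placeholders, and an iterator supplies concrete positions afterwards. The placeholder syntax (@X@/@Y@), the grid spacing, and the toy generated code are hypothetical and not the actual dynamic-frames generator output.

```python
import itertools

# Toy output of a code generator: widgets are declared, but positions are placeholders.
GENERATED = """
button_save = Button(label="Save", x=@X@, y=@Y@)
button_load = Button(label="Load", x=@X@, y=@Y@)
button_quit = Button(label="Quit", x=@X@, y=@Y@)
"""

def grid_positions(x0=20, y0=20, row_height=40):
    """Iterator yielding one (x, y) pair per widget, one widget per row."""
    for row in itertools.count():
        yield x0, y0 + row * row_height

def fill_coordinates(source: str) -> str:
    """Post-process generated code, replacing placeholders from the iterator."""
    coords = grid_positions()
    out_lines = []
    for line in source.splitlines():
        if "@X@" in line:
            x, y = next(coords)
            line = line.replace("@X@", str(x)).replace("@Y@", str(y))
        out_lines.append(line)
    return "\n".join(out_lines)

print(fill_coordinates(GENERATED))
# Save at (20, 20), Load at (20, 60), Quit at (20, 100)
```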
The user interface is a central component of any modern application program. It determines how well end users accept, learn, and efficiently work with the application program. The user interface is very difficult to design, to implement, and to modify; it takes approximately 70% of the time required for designing an application program. All the existing tools for user interface design can be divided into two basic categories, Interface Builders and model-based interface development tools, which trace their roots to user interface management systems. Interface Builders are the most widespread and are excellent for creating layouts and manipulating widgets. However, Interface Builders have the following demerits: an interface designed using Interface Builders can contain hundreds of procedures; Interface Builders give us no possibility to develop different pieces of the same interface separately; and they do not help us in managing user tasks and can be used only by programmers.
Model-based interface development tools have attracted a high degree of interest in the last few years. The basic premise of model-based technology is that interface development can be fully supported by declarative models of all user interface characteristics, such as their presentation, dialogue, application domain, etc., and that user interface development can then be centered around such models. The high potential of this technology has not been realized yet, for the following reasons: the known interface models are partial representations of interfaces; they cannot be readily modified by developers; and they are not publicly available to the HCI community. The central ingredient for success in model-based systems is a declarative, complete, versatile interface model that can express a wide variety of interface designs. Therefore, tool developers have to avoid the following disadvantages of current interface models: inflexibility, system dependence, and incompleteness. The main idea for achieving the model characteristics mentioned above is to use ontologies.
This broadened interest in ontologies is based on the fact that they provide a machine-understandable representation of semantics for information, and a shared and common understanding of a domain that can be communicated between people and across application systems. Support in data, information, and knowledge exchange is becoming the key issue in current computer technology. At the moment we are on the brink of the second Web generation, called the Semantic Web or Knowledgeable Web. Given the increasing amount of information available online, this kind of support is becoming more important day by day. The main idea of the proposed approach is to replace interface models by appropriate ontologies. Some parts of these ontologies will be available from the Internet; the other parts will be built by developers. As the Semantic Web develops, the number of ontologies formally described on the Internet will increase, and the terminology and content of these ontologies will be internationally standardized. Reusing these ontologies will bring down the cost of development and improve the quality of user interfaces. The parts of a user interface model are a domain ontology model, a dialog ontology model, a presentation ontology model, a "business-logic" variable ontology model, and the correspondences between these parts. Thus, user interface development based on ontologies is an evolution of the model-based approach, where appropriate ontologies are used instead of models.
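The closing sentences enumerate the parts of the ontology-based interface model; a minimal sketch of how those parts and their correspondences could be represented as data is given below. The field names and the flight-booking example content are illustrative assumptions, not the paper's ontology vocabulary.

```python
from dataclasses import dataclass, field

@dataclass
class OntologyConcept:
    name: str
    properties: dict = field(default_factory=dict)

@dataclass
class UserInterfaceModel:
    domain: list           # concepts of the application domain
    dialog: list           # dialog states / user tasks
    presentation: list     # abstract widgets
    variables: list        # "business-logic" variables
    correspondences: list  # (from_part, from_name, to_part, to_name) links

model = UserInterfaceModel(
    domain=[OntologyConcept("Flight", {"origin": "City", "destination": "City"})],
    dialog=[OntologyConcept("SelectFlightTask")],
    presentation=[OntologyConcept("FlightListBox")],
    variables=[OntologyConcept("selectedFlight")],
    correspondences=[
        ("dialog", "SelectFlightTask", "presentation", "FlightListBox"),
        ("presentation", "FlightListBox", "variables", "selectedFlight"),
    ],
)
print(len(model.correspondences), "correspondences")
```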
This paper describes an automated path generation method for industrial robots. Based on force control, a robotic subsystem has been developed for automatic path generation, or path learning. Using a dummy tool and roughly taught guiding points around a part contour, the robot moves in position and force controlled hybrid mode, following the order of the guiding points and with the contact force direction and value predefined. During the motion, the actual robot position is recorded by the robot controller. After the motion, the recorded position data is used to generate a robot path program automatically. Robot lead-through may be used in teaching the guiding points. Furthermore, a GUI (graphical user interface) is developed on the teach pendant to guide the operator through guiding point creation and teaching, path learning, program verification, and execution. The development has been incorporated into a robotic machining product option. The combination of the robot path learning function and the GUI enhances the interaction between the robot and the operator and drastically increases the level of ease of use.
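A minimal sketch of the path-learning step, turning positions recorded during the force-controlled pass into a motion program, is given below. The pose format, the downsampling distance, and the MoveL-style statement syntax are illustrative assumptions rather than the product's actual program format.

```python
import math

def downsample(poses, min_dist=2.0):
    """Keep a recorded pose only if it is at least `min_dist` mm from the last kept one."""
    kept = [poses[0]]
    for p in poses[1:]:
        if math.dist(p[:3], kept[-1][:3]) >= min_dist:
            kept.append(p)
    return kept

def generate_path_program(poses, speed=50):
    """Emit one MoveL-style statement per retained pose (x, y, z, rx, ry, rz)."""
    lines = ["PROC LearnedPath()"]
    for x, y, z, rx, ry, rz in downsample(poses):
        lines.append(f"  MoveL x={x:.1f} y={y:.1f} z={z:.1f} "
                     f"rx={rx:.1f} ry={ry:.1f} rz={rz:.1f} v={speed}")
    lines.append("ENDPROC")
    return "\n".join(lines)

# toy recording: points sampled along a straight edge of the part contour
recorded = [(100.0 + i, 250.0, 30.0, 180.0, 0.0, 90.0) for i in range(50)]
print(generate_path_program(recorded))
```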
In this paper, the design of a graphical user interface for CAN data frame monitoring is presented. The GUI has been developed in the Qt Creator IDE. A touch screen is used for visualization and control; it is driven by a development board with a Cyclone V SoC running a Linux operating system.
The user-computer interface for color selection is of great significance for the use of colors in computer graphics, particularly in fields where the colors used have to be selected carefully. This paper discusses what factors have to be considered to design an effective user interface for color selection. It also presents a method showing how to represent 3D color spaces according to the psychology of color perception. Two examples of user interfaces for color selection are given: one is based on the CIELUV uniform color space, the other on an RGB-rotated color model.
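For reference, a minimal sketch of mapping an RGB pick into the CIELUV uniform space is given below; it uses the standard sRGB/D65 constants and is an illustrative helper rather than either of the paper's two interfaces.

```python
def srgb_to_cieluv(r, g, b):
    """r, g, b in [0, 1] -> (L*, u*, v*) under the D65 white point."""
    def linear(c):                       # undo the sRGB gamma
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = linear(r), linear(g), linear(b)
    # linear RGB -> CIE XYZ
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # chromaticity coordinates of the sample and of the D65 white point
    denom = x + 15 * y + 3 * z
    up, vp = (4 * x / denom, 9 * y / denom) if denom else (0.0, 0.0)
    upn, vpn = 0.1978, 0.4683
    # lightness, with the usual cube-root / linear split near black
    L = 116 * y ** (1 / 3) - 16 if y > (6 / 29) ** 3 else (29 / 3) ** 3 * y
    return L, 13 * L * (up - upn), 13 * L * (vp - vpn)

print(srgb_to_cieluv(1.0, 0.0, 0.0))    # pure red -> roughly (53, 175, 38)
```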
In recent years, the types of devices used to access information systems have notably increased, with different operating systems, screen sizes, interaction mechanisms, and software features. This device fragmentation is an important issue to tackle when developing native mobile service front-end applications. To address this issue, we propose the generation of native user interfaces (UIs) by means of model transformations, following the model-based user interface (MBUI) paradigm. The resulting MBUI framework, called LIZARD, generates applications for multiple target platforms. LIZARD allows the definition of applications at a high level of abstraction and applies model transformations to generate the target native UI considering the specific features of the target platforms. The generated applications follow the UI design guidelines and the architectural and design patterns specified by the corresponding operating system manufacturer. The objective is not to generate generic applications following the lowest-common-denominator approach, but to follow the particular guidelines specified for each target device. We present an example application modeled in LIZARD, generating different UIs for Windows Phone and two types of Android devices (smartphones and tablets).
Funding: project supported by the European Commission's FP7 Serenoa Project (No. 258030), the National Program for Research, Development and Innovation, the Department of Science and Technology, Spain (No. TIN2011-25978), the European Regional Development Funds (ERDF), European Union, and the Principality of Asturias Science, Technology and Innovation Plan (No. GRUPIN14-100).
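A minimal sketch of the model-transformation idea, one abstract UI model mapped onto different native targets, is given below; the abstract model, the widget vocabulary, and the per-platform mappings are invented for illustration and do not reflect LIZARD's actual metamodel or transformation rules.

```python
# One abstract view description, transformed per target platform.
ABSTRACT_MODEL = {
    "view": "ProductList",
    "widgets": [
        {"type": "selector", "binds": "products"},
        {"type": "action",   "label": "Buy"},
    ],
}

PLATFORM_MAP = {
    "android_phone":  {"selector": "RecyclerView", "action": "MaterialButton",
                       "container": "single-pane Activity"},
    "android_tablet": {"selector": "RecyclerView", "action": "MaterialButton",
                       "container": "two-pane master/detail layout"},
    "windows_phone":  {"selector": "ListView",     "action": "AppBarButton",
                       "container": "Pivot page"},
}

def transform(model: dict, platform: str) -> str:
    """Map each abstract widget onto the native control chosen for the platform."""
    rules = PLATFORM_MAP[platform]
    lines = [f"{model['view']} -> {rules['container']} ({platform})"]
    for w in model["widgets"]:
        detail = w.get("binds") or w.get("label")
        lines.append(f"  {w['type']} -> {rules[w['type']]} [{detail}]")
    return "\n".join(lines)

for target in PLATFORM_MAP:
    print(transform(ABSTRACT_MODEL, target), end="\n\n")
```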
To work efficiently with a DSS, most users need assistance in representation conversion, i.e., translating the specific outcome from the DSS into the universal language of visuals. In general, it is much easier to understand the results from the DSS if they are translated into charts, maps, and other scientific displays, because visualization exploits the natural human ability to recognize and understand visual patterns. In this paper we discuss the concept of visualization for DSS. AniGraftool, a software system, is introduced as an example of a visualized user interface for DSS.
Repackaging brings serious threats to the Android ecosystem. Software birthmark techniques are typically applied to detect repackaged apps. Birthmarks based on apps' runtime graphical user interfaces (GUI) are effective, especially for obfuscated or encrypted apps. However, existing studies are time-consuming and not suitable for handling apps at large scale. In this paper, we propose an effective yet efficient dynamic GUI birthmark for Android apps. Briefly, we run an app with automatically generated GUI events and dump its layout after each event. We divide each dumped layout into a grid, count in each grid cell the vertices of the boundary rectangles corresponding to widgets within the layout, and generate a feature vector to encode the layout. Similar layouts are merged at runtime, and finally we obtain a graph as the birthmark of the app. Given a pair of apps to be compared, we build a weighted bipartite graph from their birthmarks and apply a modified version of the maximum-weight bipartite matching algorithm to determine whether they form a repackaging pair (RP) or not. We implement the proposed technique in a prototype, GridDroid, and apply it to detect RPs in three datasets involving 527 APKs. GridDroid reports only six false negatives and seven false positives, and it takes GridDroid merely 20 microseconds on average to compare a pair of birthmarks.
Funding: supported by the Leading-Edge Technology Program of the Jiangsu Natural Science Foundation of China under Grant No. BK20202001 and the National Natural Science Foundation of China under Grant No. 61932021.
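The grid-based encoding and bipartite matching can be sketched as follows; the grid size, the cosine similarity between layout vectors, and the normalization are assumptions made for illustration and may differ from GridDroid's exact feature definition and matching weights.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def layout_vector(widget_rects, width, height, grid=4):
    """Count boundary-rectangle vertices of widgets falling in each grid cell."""
    vec = np.zeros(grid * grid)
    for (x1, y1, x2, y2) in widget_rects:
        for (x, y) in [(x1, y1), (x1, y2), (x2, y1), (x2, y2)]:
            col = min(int(x / width * grid), grid - 1)
            row = min(int(y / height * grid), grid - 1)
            vec[row * grid + col] += 1
    return vec

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def birthmark_similarity(layouts_a, layouts_b):
    """Maximum-weight matching between the layout vectors of two apps."""
    weights = np.array([[cosine(a, b) for b in layouts_b] for a in layouts_a])
    rows, cols = linear_sum_assignment(-weights)     # negate to maximise total weight
    return weights[rows, cols].sum() / max(len(layouts_a), len(layouts_b))

app_a = [layout_vector([(0, 0, 100, 50), (0, 60, 100, 200)], 100, 200)]
app_b = [layout_vector([(0, 0, 100, 50), (0, 60, 100, 200)], 100, 200)]
print(birthmark_similarity(app_a, app_b))            # 1.0 for identical layouts
```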
UniECAD is an integrated electronic CAD system, and the user interface development system is the key to the integration of UniECAD. This paper presents the architecture of GUIDS, a graphical user interface development system in UniECAD, and then discusses a series of new techniques and methods in the design and implementation of this system, covering the following aspects: the editing environment of interface elements, the implementation of dialogue control, and the automatic generation of interface code. As an example, the generation of the main interfaces of UniECAD shows the procedure of developing user interfaces with this development system.
This paper mainly introduces a model of the Object-Oriented User Interface Generation Tool (OOUIGT) and describes the procedure of its implementation. The OOUIGT (FasTool) has been implemented on an SGI workstation. This makes UI design not a low-level programming procedure but a higher-level description procedure that is formal, convenient, comprehensive, and fast.
This paper reports the utility of eye-gaze, voice, and manual response in the design of multimodal user interfaces. A device- and application-independent user interface model (VisualMan) for 3D object selection and manipulation was developed and validated in a prototype interface based on a 3D cube manipulation task. The multimodal inputs are integrated in the prototype interface based on the priority of modalities and the interaction context. The implications of the model for virtual reality interfaces are discussed, and a virtual environment using the multimodal user interface model is proposed.
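A minimal sketch of priority-based modality integration is given below; the priority ordering, the context rule, and the cube example are illustrative assumptions rather than VisualMan's actual integration policy.

```python
from dataclasses import dataclass

@dataclass
class ModalityInput:
    modality: str   # "manual", "voice", or "gaze"
    target: str     # object the input refers to ("" if none acquired)
    action: str     # e.g. "select", "rotate"

PRIORITY = {"manual": 3, "voice": 2, "gaze": 1}

def fuse(inputs, context):
    """Pick one command from the simultaneous inputs of several modalities."""
    candidates = [i for i in inputs if i.target]
    if context.get("hands_busy"):               # context can demote a modality
        candidates = [i for i in candidates if i.modality != "manual"]
    if not candidates:
        return None
    best = max(candidates, key=lambda i: PRIORITY[i.modality])
    return best.action, best.target

events = [
    ModalityInput("gaze",   "cube_3", "select"),
    ModalityInput("voice",  "cube_3", "rotate"),
    ModalityInput("manual", "",       "select"),   # no target acquired yet
]
print(fuse(events, {"hands_busy": False}))         # ('rotate', 'cube_3')
```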