ABUSE AND MISUSE OF SOCIAL AGENTS 
				Special Issue of Interacting with Computers
						
				
				
				:: Introduction ::
				For decades, science fiction writers have envisioned a world 
				in which robots and computers act like human assistants, virtual 
				companions, and artificial helpmates. Nowadays, for better or 
				for worse, that vision is becoming reality. A number of 
				human-like interfaces and machines are under development in 
				research centers around the world, and several prototypes have 
				already been deployed on the Internet and in businesses. Even in 
				our homes, service robots, such as vacuum cleaners and lawn 
				mowers, are becoming increasingly common. These are the 
				first generation of social agents: machines designed to build 
				relationships with users while performing tasks with some degree 
				of autonomy. Social agents display a range of human-like 
				behaviors: they communicate using natural language, gesture, 
				display and recognize emotions, and are even designed to mirror 
				our facial expressions and show empathy.
				
				Until recently, scientific investigations into the psychological 
				aspects of human relationships with social agents have mainly 
				addressed the positive effects of these relationships, such as 
				increased trust and task facilitation (e.g., Bickmore & 
				Picard, 2005). Nevertheless, as the interaction bandwidth 
				evolves to encompass a broader range of social and emotional 
				expressiveness, users and social agents may also display 
				anti-social, hostile, and disinhibited behaviors. Workshops held 
				at Interact2005 and CHI2006 (De Angeli, 
				Brahnam, & Wallis, 2005; De Angeli, Brahnam, Wallis, & Dix, 
				2006) have suggested that anthropomorphic metaphors can 
				inadvertently provoke users to express dissatisfaction through 
				angry interactions, sexual harassment, and volleys of verbal 
				abuse.
				
				At first glance, verbally or even physically abusing social 
				agents and service robots may not appear to pose much of a 
				problem—nothing that could be accurately labelled abuse since 
				computers and machines are not people and thus not capable of 
				being harmed. Nevertheless, the fact that abuse, or the threat 
				of it, is part of the interaction raises important moral, 
				ethical, and design issues. As machines begin to look and act 
				more like people, it is important to ask how they should behave 
				when threatened and verbally or physically attacked. Another 
				concern is the potential for socially intelligent agents to take 
				advantage of users, especially children, who are prone to 
				attribute more warmth and human qualities to these characters 
				than they actually possess. Many parents, for instance, are 
				disturbed by the amount of information social agents are able to 
				obtain in their interactions with children. It is feared that 
				these relationship-building agents could be used as potent means 
				of marketing and advertising.
				
				For this special issue of Interacting with Computers, we are 
				soliciting papers from a range of disciplines (psychology, HCI, 
				robotics, and cultural studies) that address the negative side 
				of human-computer interaction. 
				
				:: Topics ::
				Papers on all aspects of the topic are welcome, but we are 
				particularly interested in papers that address the following 
				questions:
				* How does the misuse and abuse of social agents affect the 
				user’s computing experience?
				* How does disinhibition with social agents differ from Internet 
				disinhibition?
				* What are the psychological, sociological, and technological 
				factors involved in negative interactions between users and 
				social agents?
				* What design factors (e.g., embodiment, communication styles, 
				functionality) trigger or restrain disinhibited behaviours? 
				* What are the social and psychological consequences of negative 
				human-computer interactions?
				* How can we develop machines that learn to avoid abuse from users?
				* How can agent technology be exploited to take advantage of 
				users, and what ethical and design mechanisms can counteract this 
				possibility?
				* Is it appropriate for machines to ignore aggression? If 
				conversational agents do not acknowledge verbal abuse, will this 
				only serve to aggravate the situation? 
				* If potential clients are abusing virtual business 
				representatives, then to what extent are they abusing the 
				businesses or the social groups the human-like interfaces 
				represent? 
				
				:: Guest Editors ::
				Antonella De Angeli (University of Manchester), UK
				Sheryl Brahnam (Missouri State University), US
				
				:: Submissions ::
				Contributions should not exceed 10,000 words and should follow 
				the IwC Guidelines for Authors.
				
				Authors intending to submit a paper should send an abstract to 
				the contact person listed below.
				
				All submitted papers will undergo double-blind peer review.
				
				:: Important Dates ::
				March 20, 2007: Abstract submission
				April 5, 2007: Paper submission
				May 25, 2007: Notification of acceptance
				June 15, 2007: Camera-ready papers due
				Contact: Sheryl Brahnam, sbrahnam (*) facescience (*) org 
				(replace * with the appropriate email symbols).