Special Issue: Civilizing and Humanizing AI
Amitava Das1,3, Hemant Purohit2, Kaushik Roy1, and Amit Sheth1
1AI Institute, University of South Carolina; 2George Mason University; 3WIPRO
The emergence of large language/foundation models (LLMs/LFMs) such as GPT, Stable Diffusion, DALL-E, and Midjourney has dramatically altered the trajectory of progress in AI and its applications. Enthusiasm for AI has expanded beyond the realm of AI researchers and reached the general population; indeed, we are living in an exciting time of scientific proliferation. Present-day AI exhibits promising forms of intelligence on the spectrum from individualized to generalized intelligence, but it also possesses unexpected limitations and is susceptible to significant misuse. AI's "eloquence" has reached a level at which it is notably challenging for a human to discern AI-generated content, be it text, images, or videos. We refer to this as the "eloquence" characteristic.
Conversely, the worrisome rise of hallucinations in AI models raises credibility issues. We refer to this as the "adversity" characteristic. Recently, the governments of the United States and the European Union have put forth preliminary proposals for regulatory frameworks governing the safety of AI-powered systems. AI systems that adhere to such regulations will be referred to by a recently coined term, "Constitutional AI". The primary objective of these regulatory frameworks is to establish safeguards against the misuse of AI systems and, in the event of misuse, to impose penalties on the individuals, groups, and/or organizations responsible. The effective implementation of these frameworks demands the design of processes and tools for civilizing and humanizing AI. "Civilizing AI" embodies a nuanced equilibrium between the machine's eloquence and its inclination toward adversarial behavior. Complementing it, "Humanizing AI" (borrowed in part from Humanity-inspired AI) embodies the characterization of human expectations for the benefits and risks of adopting AI systems in society, given the machine's eloquence and adversarial behavior. As AI systems increasingly take the place of humans (e.g., an autopilot driving a vehicle, a virtual assistant diagnosing or counseling a patient), humanizing AI aims to subject an AI system to the same behavior and expectations that we expect of the corresponding human actor (e.g., a driver, a health professional). This includes subjecting an AI system to ethics, socio-cultural norms, policies, regulations, laws, and values in alignment with the expectations placed on that human actor.
We seek articles that address these two themes; representative topics include:
● Methods and frameworks for civilizing and humanizing AI
● Identifying and managing AI’s risk to individuals and society
● Detection of AI-generated content
● Modeling ethics, biases, accountability, and autonomy in AI systems
● Learning and reasoning for social norms and values
● Making AI models responsible and accountable
● Mitigating harmful hallucinations
● Building guardrails based on policy, regulations, and laws
● Adapting pretrained LLMs for individual and social context
● Incorporating cognitive models in AI models
● Cultural biases and mitigation techniques for LLMs
Prospective authors can send an abstract to firstname.lastname@example.org for feedback on fit with this special issue.
All submissions must be original manuscripts of fewer than 5,000 words, focused on the themes of this special issue. All manuscripts are subject to peer review on both technical merit and relevance to IC’s international readership—primarily practicing engineers and academics who are looking for material that introduces new technology and broadens familiarity with current topics. We do not accept white papers, and papers that are primarily theoretical or mathematical must clearly relate the mathematical content to a real-life or engineering application. To submit a manuscript, please log on to ScholarOne (https://mc.manuscriptcentral.com:443/ic-cs) to create or access an account, which you can use to log on to IC’s Author Center and upload your submission.
Manuscript submission: 1 Mar 2024
Final Materials: 1 Aug 2024
Publication: September/October 2024