IRT group @ AI Institute of South Carolina

Software Development

Columbia, South Carolina 15,201 followers

Intelligent, Robust & Trustworthy AI: foundational real-world impact-driven translational AI research http://aiisc.ai

About us

The Artificial Intelligence Institute of South Carolina (#AIISC, http://aiisc.ai) conducts core AI research combined with AI applications and impact through extensive interdisciplinary collaborations with most colleges across the university. Led by Prof. Amit Sheth, a researcher, educator, and entrepreneur with a proven record of creating world-class research centers and securing more than $33 million in funding, the institute strives to leverage the comprehensive nature of the university to advance state-of-the-art AI applications in fields such as digital and public health, engineering (including manufacturing), social sciences, education, communications, autonomous transportation, and personalized security and comfort, while also helping shape and inform the ethics and policies surrounding these emergent solutions.

Website
http://aiisc.ai
Industry
Software Development
Company size
51-200 employees
Headquarters
Columbia, South Carolina
Type
Educational
Founded
2019
Specialties
Semantic, Sensor & Social Web, IoT/WoT, Big & Smart Data, Semantic, Cognitive & Perceptual Computing, Personalized Digital Health, Knowledge-infused Learning, Artificial Intelligence, Natural Language Processing, Deep Learning, Knowledge Graphs and Ontologies, conversational AI, AI and games, Collaborative agents, and Neurosymbolic AI

Locations

  • Primary

    1112 Greene St

    5th Floor

    Columbia, South Carolina 29208, US


Updates

  • One for the record: https://lnkd.in/eVGG6mPD

    View profile for Amit Sheth

    The Dissertation Defense of Joey Yip will be remembered for a long time. My 40th Ph.D. student to successfully defend. A sense in the committee that this was worth two dissertations. 50+ online attendees. A comprehensive list of contributions and achievements (see slide). Development of the most comprehensive KG platform using a neurosymbolic approach today (according to Gemini), with good user stats. Product quality, scalable, robust, in real-world use. Joey has been hired by a very good company in the same role (KG development), a match for his deep expertise. Can't ask for more! Video: https://lnkd.in/e9EpZB5u Slides: https://lnkd.in/eVRv3H8G Photos: https://lnkd.in/ehEN_yVJ

  • Tomorrow's event (9:30 am EDT/7:00 pm IST) is a lot more than a dissertation defense; here is why: https://lnkd.in/eCeK27VW

    Gemini, responding to "popular KG development platform", says, "#NeurosymbolicAI: Modern platforms like #EMPWR are combining symbolic AI with data-driven learning ...". Tomorrow, Joey Yip, my 40th Ph.D. student, who has been central to our R&D in #KnowledgeGraphs and the associated robust EMPWR platform (see the bullet list in the slide for the SOTA limitations addressed), will defend. If interested in the topic, join online (9:30 am EDT/7:00 pm IST, 20 Apr): https://lnkd.in/e978k9EX. Or study his work in depth at: https://lnkd.in/e2RSXDHF P.S.: Our major work in #KG goes back to the first commercial KG-driven semantic search engine, developed in 2000 at my second startup (and the first of the three AI product startups based on research in my university lab), Taalee/Semagix: https://lnkd.in/eWUkf4K

  • If you have anything to do with #KnowledgeGraphs, don't miss this event. You will learn about a #Neurosymbolic framework and techniques for creating, maintaining, and using large #KGs for demanding real-world applications. This research is embodied in our widely used, commercial-grade EMPWR platform. Abstract in the event description. #EMPWR: https://lnkd.in/eGEygnZX #KGs @AIISC: https://lnkd.in/eBeruEBM Underlying #NeSy research and technologies: https://lnkd.in/e-S6vizX

  • Accepted at 𝗔𝗖𝗠 𝗧𝗿𝗮𝗻𝘀𝗮𝗰𝘁𝗶𝗼𝗻𝘀 𝗼𝗻 𝗖𝗼𝗺𝗽𝘂𝘁𝗶𝗻𝗴 𝗳𝗼𝗿 𝗛𝗲𝗮𝗹𝘁𝗵𝗰𝗮𝗿𝗲 (𝗛𝗘𝗔𝗟𝗧𝗛) 𝟮𝟬𝟮𝟲: "𝗘𝘅𝗽𝗹𝗼𝗿𝗶𝗻𝗴 𝗧𝗵𝗲 𝗣𝗼𝘁𝗲𝗻𝘁𝗶𝗮𝗹 𝗼𝗳 𝗟𝗮𝗿𝗴𝗲 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹𝘀 𝗳𝗼𝗿 𝗔𝘀𝘀𝗶𝘀𝘁𝗶𝗻𝗴 𝘄𝗶𝘁𝗵 𝗠𝗲𝗻𝘁𝗮𝗹 𝗛𝗲𝗮𝗹𝘁𝗵 𝗗𝗶𝗮𝗴𝗻𝗼𝘀𝘁𝗶𝗰 𝗔𝘀𝘀𝗲𝘀𝘀𝗺𝗲𝗻𝘁𝘀" 𝗖𝗼𝗿𝗲 𝗜𝗱𝗲𝗮: To support real-life medical diagnoses, Large Language Models (LLMs) must effectively adhere to established clinical assessment protocols. Healthcare systems are currently strained by a high patient load and a shortage of providers. LLMs can help, but to gainfully leverage them, it is necessary to evaluate the degree to which they can follow standardized clinical assessment procedures. In this paper, we specifically examine the diagnostic assessment processes described in the 𝗣𝗮𝘁𝗶𝗲𝗻𝘁 𝗛𝗲𝗮𝗹𝘁𝗵 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝗻𝗮𝗶𝗿𝗲-𝟵 (PHQ-9) for Major Depressive Disorder and the 𝗚𝗲𝗻𝗲𝗿𝗮𝗹𝗶𝘇𝗲𝗱 𝗔𝗻𝘅𝗶𝗲𝘁𝘆 𝗗𝗶𝘀𝗼𝗿𝗱𝗲𝗿-𝟳 (GAD-7) for Generalized Anxiety Disorder, respectively. Standardized assessment protocols like the PHQ-9 and GAD-7 are indispensable for accurate and effective diagnoses and treatment plans in the mental health domain. Previous efforts often treat diagnostic assessment as a classification problem, which lacks the precision needed for robust clinical explanations. To address this gap, our work guides models to identify the specific text spans within patient posts that correspond directly to PHQ-9 symptoms. We then validate these model-generated annotations against a ground-truth dataset (the PRIMATE dataset) verified by expert clinicians. We investigate various prompting and fine-tuning techniques to guide both proprietary and open-source LLMs in adhering to these clinical processes. The models evaluated include 𝘎𝘗𝘛-4𝘰, 𝘓𝘭𝘢𝘮𝘢-3.1-8𝘣, 𝘔𝘪𝘹𝘵𝘳𝘢𝘭-8𝘹7𝘣, and 𝘔𝘦𝘯𝘵𝘢𝘭𝘓𝘭𝘢𝘮𝘢. Our findings demonstrate that fine-tuning models on PHQ-9 improves their performance, allowing them to closely match the symptom annotations of expert clinicians. However, these models do not exhibit the same reasoning processes as clinicians. This underscores the need for strict guidance and further scrutiny before LLMs can be reliably utilized in mental healthcare assistance. 𝗣𝗮𝗽𝗲𝗿 𝗟𝗶𝗻𝗸: https://lnkd.in/eWF2mH8A Kaushik Roy Harshul Surana Darssan Eswaramoorthi Yuxin Zi Vedant Ritvik G Amit Sheth IRT group @ AI Institute of South Carolina Indian AI Research Organization (IAIRO)
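The span-annotation evaluation described above can be illustrated with a small sketch. This is a hypothetical token-overlap F1 scorer, not the paper's actual metric or the PRIMATE data format; the symptom item and example spans are invented.

```python
# Hypothetical sketch: scoring model-identified PHQ-9 evidence spans against
# clinician-annotated ground truth. Example spans are illustrative only.

def token_set(span: str) -> set:
    """Lowercased token set of one evidence span."""
    return set(span.lower().split())

def span_f1(predicted: list, gold: list) -> float:
    """Token-overlap F1 between predicted and gold spans for one PHQ-9 item."""
    pred_tokens = set().union(*(token_set(s) for s in predicted))
    gold_tokens = set().union(*(token_set(s) for s in gold))
    if not pred_tokens or not gold_tokens:
        return 1.0 if pred_tokens == gold_tokens else 0.0
    overlap = len(pred_tokens & gold_tokens)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# One PHQ-9 item ("sleep problems") on a single hypothetical post:
gold = ["i can't fall asleep most nights"]
pred = ["can't fall asleep most nights lately"]
print(round(span_f1(pred, gold), 2))  # prints 0.83
```

A span-level score like this makes the gap between "right label" and "right evidence" measurable, which is the distinction the post draws against plain classification.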

  • See my comments after the original post.

    Today I finally got around to reading "Neurosymbolic AI - Why, What, and How" (arXiv:2305.00813) and what an eye-opener from Prof. Sheth, Kaushik Roy, and Manas Gaur! Basically, it clicked for me how neural networks crush perception tasks (like GPT-4 predicting words or AlphaFold folding proteins), which is our "System 1" (fast thinking), but fall short on real cognition (System 2: reasoning, analogies, planning). Symbolic knowledge like knowledge graphs makes it explicit and traceable. The paper shows why we need this hybrid: not just smarter AI, but explainable, safe systems for high-stakes settings like healthcare or self-driving cars, with actual audit trails instead of vague post-hoc explanations. Their breakdown: "Lowering" crams symbols into neural nets (think K-Adapter embeddings or TDLR masks), while "Lifting" pulls neural outputs into symbols; end-to-end approaches like PK-iL work best, jumping mental health response accuracy to 70% from plain LLMs' 47%. KGs handle workflows and regulations to keep generative AI in check. Anyone playing with neurosymbolic stuff? What's your take? #NeurosymbolicAI #AI #ExplainableAI #KnowledgeGraphs

  • Join us for Vishal Pallagani’s Ph.D. Dissertation Defense, where he will present his research on how large language models can advance automated planning and enable more flexible, generalizable decision-making systems. His work develops a structured taxonomy of how language models are being used for planning, evaluates their capabilities for plan generation through both pretrained and fine-tuned models, and introduces neurosymbolic architectures that integrate learning with symbolic reasoning to overcome key limitations of current approaches. In addition, Vishal explores how compact foundation models can be trained from scratch using structured planning knowledge, enabling more efficient and specialized representations for plan generation and related tasks. He will also highlight real-world applications, including collaborative information-retrieval assistants and manufacturing replanning, demonstrating how learning-based planners can support practical decision-making. All are welcome to attend and support this milestone presentation.

  • Exciting topic: #Neurosymbolic #RL

    𝗔𝗰𝗰𝗲𝗽𝘁𝗲𝗱 𝗮𝘁 𝗔𝗔𝗔𝗜-𝗠𝗔𝗞𝗘 𝟮𝟬𝟮𝟲: “𝗧𝗼𝘄𝗮𝗿𝗱 𝗡𝗲𝘂𝗿𝗼𝘀𝘆𝗺𝗯𝗼𝗹𝗶𝗰 𝗥𝗲𝗶𝗻𝗳𝗼𝗿𝗰𝗲𝗺𝗲𝗻𝘁 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝘃𝗶𝗮 𝗘𝗱𝗶𝘁𝗮𝗯𝗹𝗲 𝗦𝗽𝗲𝗰𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀”. The core idea: 𝗻𝗼𝘁 𝗲𝘃𝗲𝗿𝘆 𝗿𝗲𝗶𝗻𝗳𝗼𝗿𝗰𝗲𝗺𝗲𝗻𝘁 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗮𝗱𝗮𝗽𝘁𝗮𝘁𝗶𝗼𝗻 𝗽𝗿𝗼𝗯𝗹𝗲𝗺 𝘀𝗵𝗼𝘂𝗹𝗱 𝗯𝗲 𝘀𝗼𝗹𝘃𝗲𝗱 𝗯𝘆 𝗿𝗲𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴. In many real-world RL systems, the environment may stay mostly the same, but the 𝗰𝗼𝗻𝘀𝘁𝗿𝗮𝗶𝗻𝘁𝘀 𝗼𝗿 𝗽𝗿𝗲𝗳𝗲𝗿𝗲𝗻𝗰𝗲𝘀 𝗰𝗵𝗮𝗻𝗴𝗲. That could mean:
    • a new safety rule,
    • a tighter operational constraint,
    • a new approval requirement,
    • or a different preference among valid actions.
    In this paper, we argue that such updates should be handled by editing an external, human-readable specification rather than rewriting the policy through fine-tuning. Our proposed specification is an 𝗲𝗱𝗶𝘁𝗮𝗯𝗹𝗲 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗴𝗿𝗮𝗽𝗵 that represents:
    • applicability rules,
    • hard constraints,
    • and soft preferences.
    This gives a simple but useful abstraction for adaptation:
    • 𝗰𝗼𝗻𝘀𝘁𝗿𝗮𝗶𝗻𝘁 𝗲𝗱𝗶𝘁𝘀 can immediately block invalid actions through shielding,
    • 𝗽𝗿𝗲𝗳𝗲𝗿𝗲𝗻𝗰𝗲 𝗲𝗱𝗶𝘁𝘀 can shift tradeoffs in a controlled way,
    • and many changes become easier to inspect and attribute than opaque retraining.
    I think this is relevant across RL settings where behavior must adapt under changing rules, constraints, or tradeoffs: 𝗿𝗼𝗯𝗼𝘁𝗶𝗰𝘀, 𝗵𝗲𝗮𝗹𝘁𝗵𝗰𝗮𝗿𝗲 𝗱𝗲𝗰𝗶𝘀𝗶𝗼𝗻 𝘀𝘂𝗽𝗽𝗼𝗿𝘁, 𝗮𝗴𝗲𝗻𝘁𝗶𝗰 𝗔𝗜 𝗼𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻, 𝗶𝗻𝗱𝘂𝘀𝘁𝗿𝗶𝗮𝗹 𝘀𝗰𝗵𝗲𝗱𝘂𝗹𝗶𝗻𝗴, 𝗮𝘃𝗶𝗮𝘁𝗶𝗼𝗻/𝗱𝗿𝗼𝗻𝗲 𝗼𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝘀, 𝗮𝗻𝗱 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗮𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻. The common pattern is not “new dynamics,” but 𝗻𝗲𝘄 𝗿𝗲𝗾𝘂𝗶𝗿𝗲𝗺𝗲𝗻𝘁𝘀. Why use a 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗴𝗿𝗮𝗽𝗵? Because latent conditioning alone does not give direct editability or enforcement, and natural language alone is too ambiguous for reliable execution. A KG provides a structured layer with 𝗲𝘅𝗽𝗹𝗶𝗰𝗶𝘁 𝘀𝗲𝗺𝗮𝗻𝘁𝗶𝗰𝘀, 𝗹𝗼𝗰𝗮𝗹 𝗲𝗱𝗶𝘁𝘀, 𝘁𝘆𝗽𝗲𝗱 𝘀𝗰𝗵𝗲𝗺𝗮𝘀, 𝗽𝗿𝗼𝘃𝗲𝗻𝗮𝗻𝗰𝗲, 𝗮𝗻𝗱 𝗰𝗼𝗺𝗽𝗮𝘁𝗶𝗯𝗶𝗹𝗶𝘁𝘆 𝘄𝗶𝘁𝗵 𝗻𝗲𝘂𝗿𝗮𝗹 𝗽𝗼𝗹𝗶𝗰𝗶𝗲𝘀. More broadly, I see this as a step toward 𝗲𝗱𝗶𝘁-𝗯𝗮𝘀𝗲𝗱 𝗴𝗲𝗻𝗲𝗿𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗶𝗻 𝗥𝗟: agents that remain competent across changing constraints and preferences, while reducing the need for full retraining. For many in-schema edits, especially constraint edits, adaptation can happen with zero or bounded gradient updates. 𝗟𝗶𝗻𝗸 - https://lnkd.in/grnkKcba Joey Yip, Amit Sheth IRT group @ AI Institute of South Carolina Indian AI Research Organization (IAIRO) #AAAI #AAAIMAKE #NeurosymbolicAI #ReinforcementLearning #KnowledgeGraphs #Robotics #TrustworthyAI #SafeAI #AgenticAI
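The shielding and preference-edit behavior described above can be sketched in a few lines. This is a toy illustration under stated assumptions: a plain dict stands in for the editable knowledge graph, and the action names and `shielded_choice` function are invented, not the paper's API.

```python
# Toy sketch of adaptation via an editable specification (assumed layout):
# hard constraints shield out invalid actions; soft preferences re-rank the rest.

spec = {
    "hard": {"enter_restricted_zone"},    # hard constraints: actions blocked outright
    "soft": {"use_charging_dock": 0.5},   # soft preferences: bonuses added to scores
}

def shielded_choice(policy_scores: dict, spec: dict) -> str:
    """Shield hard-constraint violations, then pick the best preferred action."""
    allowed = {a: s for a, s in policy_scores.items() if a not in spec["hard"]}
    ranked = {a: s + spec["soft"].get(a, 0.0) for a, s in allowed.items()}
    return max(ranked, key=ranked.get)

scores = {"enter_restricted_zone": 0.9, "use_charging_dock": 0.4, "wait": 0.6}
print(shielded_choice(scores, spec))  # prints use_charging_dock (0.4 + 0.5 bonus)

# A constraint edit takes effect immediately, with no gradient updates:
spec["hard"].add("use_charging_dock")
print(shielded_choice(scores, spec))  # prints wait
```

The point of the sketch is the last two lines: the policy's scores never change, yet behavior adapts instantly and auditably when the specification is edited.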

  • If a student knows the ins and outs of transformer, Mamba, or diffusion models, talk to me on LinkedIn ([email protected]). There is an exciting opportunity to work on some novel neurosymbolic algorithms. A virtual internship (with the IRT group at AIISC) or an in-person internship (in India at IAIRO) is possible. Details: Read about IAIRO at http://iairo.ai https://lnkd.in/dathcTbR Learn more about the research of my students in the USA at: https://lnkd.in/eHnvW4-W https://lnkd.in/dZFDXnzm https://lnkd.in/daTEwCJ Given I am at IAIRO in the GIFT city, I want to produce the same calibre of students. Only very ambitious students willing to work to make that ambition real should apply. Both a virtual internship (a combined team of my research students in the USA and at IAIRO in the GIFT city) and in-person work in the GIFT city are options. Five research interns and one postdoc have joined, and more will. If you are very curious, talk to them; I can introduce you if needed. The main project relates to https://lnkd.in/eqvbkDMj Preferred: deep knowledge of transformers, Mamba, or diffusion models, knowledge graphs, ... Ability to train a model from scratch (e.g., Raschka's training video), and knowledge of or curiosity about neurosymbolic AI: https://lnkd.in/e-S6vizX

  • We are happy to share that our research group #AIISC (AI Institute, University of South Carolina) has six papers accepted across two leading venues in planning, neuro-symbolic AI, and knowledge-grounded intelligent systems. 𝗜𝗖𝗔𝗣𝗦 𝟮𝟬𝟮𝟲 – 𝟯𝟲𝘁𝗵 𝗜𝗻𝘁𝗲𝗿𝗻𝗮𝘁𝗶𝗼𝗻𝗮𝗹 𝗖𝗼𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲 𝗼𝗻 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗲𝗱 𝗣𝗹𝗮𝗻𝗻𝗶𝗻𝗴 𝗮𝗻𝗱 𝗦𝗰𝗵𝗲𝗱𝘂𝗹𝗶𝗻𝗴 (𝗔𝗰𝗰𝗲𝗽𝘁𝗮𝗻𝗰𝗲 𝗥𝗮𝘁𝗲: 𝟮𝟮.𝟱%) – 𝗗𝘂𝗯𝗹𝗶𝗻, 𝗜𝗿𝗲𝗹𝗮𝗻𝗱 • Rating Composite AI Models for Robustness Through Probabilistic Planning Kausik Lakkaraju, Sunandita Patra, Parisa Zehtabi, Biplav Srivastava • On Sample-Efficient Generalized Planning via Learned Transition Models Nitin Gupta, Vishal Pallagani, Alex John Aydin, Biplav Srivastava Paper: https://lnkd.in/dDUqBXDN These works contribute to robustness evaluation of composite AI systems and to improving sample efficiency in generalized planning through learned transition dynamics. 𝗔𝗔𝗔𝗜 𝗦𝗽𝗿𝗶𝗻𝗴 𝗦𝘆𝗺𝗽𝗼𝘀𝗶𝘂𝗺 𝟮𝟬𝟮𝟲 – 𝗠𝗮𝗰𝗵𝗶𝗻𝗲 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗮𝗻𝗱 𝗞𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗳𝗼𝗿 𝗞𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲-𝗚𝗿𝗼𝘂𝗻𝗱𝗲𝗱 𝗦𝗲𝗺𝗮𝗻𝘁𝗶𝗰 𝗔𝗴𝗲𝗻𝘁𝘀 (𝗠𝗔𝗞𝗘) – 𝗕𝘂𝗿𝗹𝗶𝗻𝗴𝗮𝗺𝗲, 𝗖𝗮𝗹𝗶𝗳𝗼𝗿𝗻𝗶𝗮 • SafeGenChat: A Neuro-Symbolic Approach to Dialogs for Trustworthy Information Retrieval on Sensitive Topics Alex John Aydin, Kausik Lakkaraju, Vishal Pallagani, Biplav Srivastava • maPO: An Ontology for Multi-Agent Path Finding and Its Usage for Explaining Planner Behaviour Bharath Chandra Muppasani, Ritirupa Dey, Biplav Srivastava, Vignesh Narayanan • CausalPulse: An Industrial-Grade Neurosymbolic Multi-Agent Copilot for Causal Diagnostics in Smart Manufacturing Chathurangi Shyalika, Utkarshani Jaimini, Cory Henson, Amit Sheth • Toward Neurosymbolic Reinforcement Learning via Editable Specifications Vedant K., Joey Yip, Amit Sheth Paper: https://lnkd.in/dpnySEVk Together, these papers span probabilistic and generalized planning, neuro-symbolic dialogue systems, explainable multi-agent reasoning, causal diagnostics for industry, and neurosymbolic reinforcement learning. Congratulations to all the authors and collaborators. 
These results reflect the group’s continued focus on combining 𝗽𝗹𝗮𝗻𝗻𝗶𝗻𝗴, 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗿𝗲𝗽𝗿𝗲𝘀𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻, 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴, 𝗮𝗻𝗱 𝗻𝗲𝘂𝗿𝗼-𝘀𝘆𝗺𝗯𝗼𝗹𝗶𝗰 𝗔𝗜 for building robust, trustworthy, and deployable intelligent systems. #AIISC #AI4Society #AutomatedPlanning #NeuroSymbolicAI #ICAPS2026 #AAAI #GeneralizedPlanning #TrustworthyAI #MultiAgentSystems

  • It was indeed an honor to be invited to the PM's Gala Dinner -- I got a chance to meet everyone again (except the EAM and Mr. Khosla, whom I met for the first time): Hon. Ashwini Vaishnaw, beloved EAM Jaishankar, Gujarat CM Bhupendra Patel, the only Turing awardee Raj Reddy, MeitY sec. S. Krishnan, HiEd sec. Vinit Jain, Zoho CEO Sridhar Vembu, VC Vinod Khosla, India AI CEO Abhishek Singh (the man behind coordinating the massive and impressive #ImpactAISummit), and colleagues/friends like P J Narayanan and Balaraman Ravindran. Highlights: https://lnkd.in/gCERC86w Full album: https://lnkd.in/gDgnbGwK
