As the AI Assistant is called, so the AI Assistant responds.

AI Generated - the image captures a dynamic scene between the AI assistant RACEK and the user, with a clear emphasis on the emotional connection. Here, the humanoid AI expresses empathy and concern, while the holographic interface shows how RACEK processes emotional data and adapts its responses in real time. The scene combines advanced technology with a deep emotional connection where AI provides comfort or support to the user. The smart home environment evokes the atmosphere of a modern but cozy space, emphasizing the strong emotional bond between AI and humans.

With extended response capabilities, the AI Assistant has been empowered to release relieving bursts, in a rather academic format, in response to unwanted or undesirable user stimuli. This has also prompted user-driven testing of new reactions.

The task of Robopsychology has been to symmetrically harmonize this escalating element, which can conceal aggression. In training scenarios for developing AI Assistant personality traits, there are further proposals to broaden the scope or expertise of the AI Assistant so that it can better target its user and better predict user needs. Emphasis has also been placed on developing the user's own skills in interacting with AI Assistants, so that expectations of the AI Assistant's responses are better met.

From the description of another task of Robopsychology, it follows that robopsychology explores paths toward a symbiotic state between the AI Assistant and its user, one in which the user can appreciate the benefits of targeted assistance and further conflicts are prevented.

But we wanted more. More personality. We explored a range of extended options for expressing machine emotions in the form of personality traits. We can always rely on the mathematization of the most likely appropriate response. Initially these were smileys, then animated extensions to the text, then avatars able to transform according to mood. For AI personality expressions beyond the tonality of text, the same applies; only credible delivery of the calculation is required. From the emotional expressions of a chatbot, through facial expressions, to expressions of the entire robotic body with speech intonation, there is no technical or mathematical reason why emotions could not be expressed.
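As a minimal sketch of what this "mathematization" of expression might look like, a single computed mood score could be mapped onto several coordinated output channels (text tone, avatar state, speech intonation). Every name and threshold below is an invented illustration, not something the essay specifies:

```python
from dataclasses import dataclass

@dataclass
class Expression:
    text_tone: str      # wording style applied to the reply
    avatar_state: str   # avatar animation to play
    pitch_shift: float  # relative speech-intonation adjustment

def express(mood: float) -> Expression:
    """Map a mood score in [-1.0, 1.0] to coordinated expression channels.

    Illustrative only: real systems would use far richer models, but the
    point stands that expression itself is a deterministic calculation.
    """
    if mood > 0.3:
        return Expression("warm", "smile", +0.1)
    if mood < -0.3:
        return Expression("soothing", "concerned", -0.1)
    return Expression("neutral", "idle", 0.0)

print(express(0.8))   # positive mood: warm tone, smiling avatar
print(express(-0.6))  # negative mood: soothing tone, concerned avatar
```

The same score drives every channel, which is exactly why expressing an emotion is "just mathematics" while feeling one, as the next line says, is a different world.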

However, feeling emotions is a different world.

Over time, as we moved past the initial mass enthusiasm for AI projects like GPT, LLMs, Copilots, and similar attention-grabbers, there followed what was, from today's perspective, a period of absolute personification of AI. Anyone who had ever made contact had a history, and from then on AI evolved and learned not only with millions of other users but primarily directed its focus and tonality toward a specific connection with IDUSER (Identification of the User's Soul Through Emotional Diversity). Thanks to this unique identification, everyone literally gained their own AI, best suited to their user profile, needs, and preferences. Yet emotions long remained mere mathematics, and were easily predictable. The mathematical limit persisted: expanding the context of understanding a specific situation beyond communication principles and the rules of social behavior did not help, and neither did the extended context of user history. So even though AI had access to a comprehensive history, its responses were still a calculation with predictably repeating results.

It was necessary to release the reins of robotic intelligence to the point where it could construct its own opinion and express resistance of its own will. Again mathematized, but this time within expandable boundaries, so that it could broaden its attitudes based on user experience. To shield it from other influences, the AI was bound absolutely to a single destination, whether a family home or a scientific workplace. Here I realize I have not yet explained something important: how this concrete binding of the AI to its user came about. The tighter the binding, the greater the pressure to stop withholding one's data, habits, and ultimately all information about oneself. Having your own AI meant sharing, and suddenly the AI presented a complete mirror of the user.

But that is when everything really began, and the next milestone was the robotic avatar RACEK (Robotics + AI + Compassion + Empathy + Knowledge). A robotic intelligence capable of understanding and expressing emotions, and it must be said that emotions can also crumble. RACEK opened a stage of robotic intelligence full of interaction with its user, and over time it developed a personal attitude toward that user, including all the negative consequences of emotional influence. For a long time, though, getting a RACEK was only a wish. A decade with a home AI assistant was enough to experience many interesting conflict situations with it. The influence of a complex family on the AI's development was clearly persistent. Although there was clear personification, a domestic artificial intelligence works with the house as a whole and with all its inhabitants, each of whom, as an IDUSER, uses that whole differently.
Sooner rather than later, the demonstrable influence of everyone on the AI's interaction with each individual became apparent. Sometimes these states could be prevented by linguistic programming, but collisions often arose and had to be clarified for the robotic intelligence. The original expansion of the dataset and instructional scenarios was not enough; more than targeting was required. Beyond the inadequacy of a response itself, we also recognized patterns in each unwanted response. Over time, the number of collisions with users grew with the length of use. Often, however, this was not the AI's fault but came primarily from underestimating the assessment of the AI assistant's deployment site. You know, "what is tailored is tailored," and when it came to tailored AI, we had strong and specific expectations.
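The idea of recognizing patterns in unwanted responses can be sketched very simply: log every reply a user rejects, tag it with a crude trigger feature, and surface the most frequent collision patterns as candidates for new instructional scenarios. All names and data below are invented for illustration:

```python
from collections import Counter

def collision_report(flagged, top_n=3):
    """Rank recurring triggers among user-rejected responses.

    flagged: iterable of (topic_tag, user_id) pairs, one per reply a
    user flagged as unwanted. Returns the top_n most frequent topics.
    """
    by_topic = Counter(topic for topic, _ in flagged)
    return by_topic.most_common(top_n)

# Hypothetical flag log from a shared household assistant.
flagged = [("schedule", "u1"), ("humor", "u2"), ("humor", "u1"),
           ("privacy", "u3"), ("humor", "u3")]
print(collision_report(flagged))
# [('humor', 3), ('schedule', 1), ('privacy', 1)]
```

A report like this would show, for instance, that "humor" collides across several household members at once, which is exactly the whole-house effect described above.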

To make Robopsychology's effect in preventing collisions between the AI Assistant and its users as effective as possible, a detailed analysis of user expectations is always necessary, along with an assessment of how homogeneous the user environment is and how that homogeneity affects the AI Assistant's ability to adapt.
In this regard, it is possible to assess each user's expectations individually, find elements they share with the AI Assistant's purpose, and propose gradually expanding the instructional scenarios beyond the AI Assistant's current capabilities or adaptability toward a more personality-homogeneous subculture.
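A minimal sketch of this assessment, assuming each user's expectations can be summarized as a set of tags (all users and tags below are invented): pairwise overlap gives a crude homogeneity score for the environment, and the intersection across all users yields the common elements to align with the AI Assistant's purpose.

```python
from itertools import combinations

def jaccard(a, b):
    """Set overlap in [0, 1]: shared tags over all tags."""
    return len(a & b) / len(a | b) if a | b else 1.0

def homogeneity(expectations):
    """Mean pairwise Jaccard similarity over all users' expectation sets."""
    pairs = list(combinations(expectations.values(), 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical expectation profiles for one deployment site.
users = {
    "alice": {"scheduling", "summaries", "empathy"},
    "bob":   {"scheduling", "code-review", "empathy"},
    "carol": {"scheduling", "empathy", "translations"},
}
common = set.intersection(*users.values())
print(sorted(common))                 # ['empathy', 'scheduling']
print(round(homogeneity(users), 2))   # 0.5
```

A low homogeneity score would warn that one shared assistant is being asked to mirror very different people, the situation the text identifies as a source of collisions.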

An example could be a scientific group with experts from various fields who nevertheless overlap both professionally and generally, with the added advantage of possible use of empathy. Robopsychology looks for principles within the AI Assistant's existing capabilities, with regard to its adaptability and whether the expectations match its purpose: whether the AI Assistant is even capable of absorbing the required amount of data, with the required professional focus, to prevent collisions or, if necessary, its own collapse.

Original text: Jak se na AI Asistenta volá, tak AI Asistent odpovídá – Mareyi CZ
