
AI Agents for Software Development

Overcoming Barriers: How to Successfully Implement AI Agents in Software

  • Published:
  • Author: [at] Editorial Team
  • Category: Basics
    Figure: AI Agents in Software Development (Alexander Thamm GmbH 2025, GenAI)

    Large Language Models (LLMs) are becoming increasingly proficient at performing various tasks. They are so versatile that people use them for a wide range of applications, including task automation, internal chatbots, information extraction, information retrieval, and document summarization.

    Therefore, it’s no surprise that more and more companies are trying to leverage the power of LLMs to improve both their operational efficiency and their profitability. Interestingly, these LLMs can be used either as standalone entities or in collaboration with other systems and tools to perform tasks, a concept commonly referred to today as AI agents.

    What are AI Agents in Software Engineering?

    AI agents are autonomous AI systems that can perform tasks, make decisions, and interact with their environment automatically with little to no human supervision. An AI agent can consist of a single AI technology or a collection of AI technologies and tools working together to perform specific tasks.

    The technologies behind an AI agent can include machine learning, computer vision, natural language processing (NLP), large language models (LLMs), robotics, search engines, rule-based scripts, and more. In terms of application, there are two types of AI agents: virtual and physical AI agents. Virtual AI agents operate in digital environments, such as software, while physical AI agents function in the real world, like robots. In this article, we’re going to focus on virtual AI agents and their applications in industrial software.
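    To make this more concrete, here is a minimal, self-contained sketch of the perceive-decide-act loop that most virtual AI agents follow. The call_llm function is a scripted stand-in for a real LLM API call, and the tool and the order data are invented purely for illustration.

```python
# Minimal sketch of a virtual AI agent's perceive-decide-act loop.
# `call_llm` is a scripted stand-in for a real LLM API call.

_SCRIPT = iter([
    "TOOL lookup_order 4711",
    "FINISH Your order 4711 has shipped.",
])

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; here it just replays a scripted decision."""
    return next(_SCRIPT)

# Tools the agent can use to act on its digital environment (invented example).
TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    """Let the model decide each step: call a tool (act), read the result (perceive), or finish."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        decision = call_llm(
            "\n".join(history) + "\nReply with 'TOOL <name> <arg>' or 'FINISH <answer>'."
        )
        if decision.startswith("FINISH"):
            return decision.removeprefix("FINISH").strip()
        _, tool_name, arg = decision.split(maxsplit=2)
        observation = TOOLS[tool_name](arg)
        history.append(f"Observation from {tool_name}: {observation}")
    return "Stopped after reaching the step limit."

print(run_agent("Where is order 4711?"))
```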

    Nowadays, many major AI tech companies have developed software that allows us to build or experience virtual AI agents more easily:

    • OpenAI released the OpenAI Agents SDK, making it easier to build multi-agent workflows that integrate LLMs and tool calling. It also includes a specialized tool call for transferring control between agents (a minimal sketch of such a workflow follows this list).
    • Anthropic introduced Computer Use, which lets its Claude models (initially Claude 3.5 Sonnet) interact with computers much like humans do. The models can perform tasks such as viewing the screen, moving the mouse cursor, clicking buttons, and even debugging code when needed.
    • Microsoft launched Copilot Studio, a platform designed to simplify AI agent development with an intuitive UI and little to no coding required. This allows anyone to create AI agents without extensive programming or technical expertise.
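    As a rough illustration of the first point above, the following sketch shows how a small multi-agent workflow with a tool call and an agent handoff might look with the OpenAI Agents SDK (the openai-agents Python package). It is a sketch based on the SDK’s published usage patterns, not a definitive implementation; consult the official documentation for the current API.

```python
# Sketch of a two-agent workflow with the OpenAI Agents SDK.
# Assumes `pip install openai-agents` and an OPENAI_API_KEY in the environment.
from agents import Agent, Runner, function_tool

@function_tool
def get_order_status(order_id: str) -> str:
    """Toy tool: look up an order status (hypothetical backend)."""
    return f"Order {order_id} is out for delivery."

support_agent = Agent(
    name="Support agent",
    instructions="Answer order questions. Use the tool when an order ID is given.",
    tools=[get_order_status],
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Route order-related questions to the support agent.",
    handoffs=[support_agent],  # the specialized tool call that transfers control
)

if __name__ == "__main__":
    result = Runner.run_sync(triage_agent, "Where is order 4711?")
    print(result.final_output)
```

    Running the script routes the question from the triage agent to the support agent, which answers via the tool.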

    How Do AI Agents Improve Industrial Software Solutions?

    The implementation of AI agents in industrial software has improved efficiency and profitability for companies across several sectors, such as government, healthcare, and e-commerce. There are several reasons why AI agents are highly useful for companies.

    First, AI agents are autonomous, which means they can perform tasks on their own with little to no human intervention. This ultimately leads to increased production efficiency and lower labor costs. This also means that human experts can shift their focus to more meaningful tasks that require complex decision-making.

    AI agents can also reduce the subjective bias associated with human decision-making. As humans, we naturally carry certain internal biases: what we perceive as a good decision might not be viewed the same way by others. This can lead to inconsistencies in assessment quality and judgment. AI agents, on the other hand, make decisions based on real-time data, allowing them to provide consistent judgments.

    Several companies across these sectors have already implemented AI agents successfully to improve efficiency and profitability; one such example, from Siemens, is discussed later in this article.

    How to Successfully Implement AI Agents in Software Development

    In the previous section, we saw how AI agents have proven to be highly impactful in improving the efficiency and profitability of companies across different domains. Therefore, it’s natural to expect that more and more companies will start adopting AI agents in their daily operations.

    However, implementing AI agents is not always straightforward. Several factors must be considered to ensure a smooth deployment, such as the reliability and security of AI agents and their interoperability with existing systems. Beyond these technical aspects, we also need to ensure that AI agents comply with regulatory, social, and ethical standards. In this section, we’ll discuss everything that needs to be considered to successfully integrate AI agents into a company’s workflow.

    Regulatory, Social, and Ethical Aspects

    Transitioning a business workflow from fully manual human labor to an autonomous system powered by AI agents requires proper communication with both employees and customers. This is essential to address social and ethical concerns regarding AI agent implementation.

    To build trust among employees, it’s important to clearly communicate the reasons behind AI adoption and highlight how AI agents will improve workflow efficiency. Additionally, we must emphasize that AI agents are designed to augment, not replace, human capabilities, in order to address employees’ concerns about job security.

    Aside from clear communication, proper employee training on working alongside AI agents is also crucial. This can be achieved through hands-on training, seminars, e-learning platforms, and other educational resources. Providing this training ensures that employees understand both the advantages and limitations of AI agents while equipping them with the necessary skills to continuously monitor AI performance.

    Meanwhile, to build trust among customers, transparency is key. A recent study conducted by Salesforce found that 72% of customers believe it’s important to know whether they are interacting with an AI agent. Therefore, if AI is being used in customer interactions, it should be clearly mentioned. Companies should inform customers about where and how AI is utilized in their services. If customer data is being used, it’s important to clarify how it is protected. Additionally, human support should always be available for complex requests that AI agents cannot handle.

    Regarding the regulatory aspects of AI agent implementation, data governance is a critical consideration. To ensure compliance, companies can take the following steps:

    • Validate Data Sources: Review the origins of datasets used to train AI agents and ensure they comply with legal standards and data protection laws such as GDPR.
    • Enforce Data Transparency: Clearly inform customers that they are interacting with AI and explain how their data is being used.
    • Stay Updated on Regulations: Ensure AI implementation complies with current legal frameworks, such as the EU AI Act and other regulatory measures.
    • Implement Data Protection: Use data security measures such as encryption and anonymization to protect sensitive data from unauthorized use and breaches (a small pseudonymization sketch follows this list).
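    As a small illustration of the last point, the sketch below pseudonymizes sensitive customer fields before a record ever reaches an AI agent. It uses only Python’s standard library; the field names and the way the key is loaded are assumptions for illustration, and a real deployment would combine this with encryption at rest and in transit plus proper key management.

```python
# Sketch: pseudonymize sensitive fields before handing a record to an AI agent.
# Field names and key handling are illustrative assumptions only.
import hashlib
import hmac
import os

# In practice the key would come from a secrets manager, not an environment default.
PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "change-me").encode()

SENSITIVE_FIELDS = {"customer_id", "email"}

def pseudonymize(record: dict) -> dict:
    """Replace sensitive fields with keyed hashes so the agent never sees raw PII."""
    safe = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hmac.new(PSEUDONYM_KEY, str(value).encode(), hashlib.sha256)
            safe[key] = digest.hexdigest()[:16]
        else:
            safe[key] = value
    return safe

print(pseudonymize({"customer_id": "C-1001", "email": "jane@example.com", "topic": "refund request"}))
```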

    Interoperability of AI Agents with Existing Industrial Software Solutions

    Once we have considered the regulatory, social, and ethical aspects, the next step is to think about the interoperability of AI agents. Successfully integrating AI agents involves more than just plugging in a new algorithm; it requires ensuring seamless communication between the AI agents and the existing software solutions.

    The challenge is that integrating AI agents into existing software systems is often not straightforward. It typically requires modifications or further adjustments to existing software systems, such as:

    • Data Format: The data stored in existing solutions may use a different format than the one the AI agents expect. Impact: the agents might not be able to use the data, or they might generate incorrect predictions (a small adapter sketch follows this list).
    • Standardized APIs: Existing systems may lack standardized APIs for external integrations. Impact: AI agents might struggle to interface with them, leading to delays in integration.
    • Software Architecture: Existing systems might have monolithic architectures that are hard to scale and modify. Impact: integrating AI agents might require architectural changes to the underlying software infrastructure, which can be costly and time-consuming.
    • Hardware Resources: Existing systems might not be designed to handle the data processing demands of AI agents. Impact: the agents may not function efficiently, leading to bottlenecks or system crashes.
    • Security and Compliance: Existing systems might not have sufficient security measures to protect the data. Impact: the AI agents could be misused after integration.
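    To make the data-format issue above concrete, a thin adapter layer can translate records from a legacy system into the schema an AI agent expects. All field names, units, and types below are hypothetical; the real mapping depends on the systems involved.

```python
# Sketch: adapter that maps a legacy record format onto the schema an AI agent expects.
# All field names, units, and encodings here are hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AgentInput:
    machine_id: str
    temperature_c: float
    measured_at: datetime

def from_legacy(row: dict) -> AgentInput:
    """Translate a legacy row (string fields, Fahrenheit, epoch seconds) into the agent schema."""
    return AgentInput(
        machine_id=row["MACH_NO"],
        temperature_c=(float(row["TEMP_F"]) - 32) * 5 / 9,
        measured_at=datetime.fromtimestamp(int(row["TS"])),
    )

print(from_legacy({"MACH_NO": "M-17", "TEMP_F": "172.4", "TS": "1735689600"}))
```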

    Therefore, to ensure the successful integration of AI agents, we first need to thoroughly assess the existing IT infrastructure. We should map out what is required to integrate AI agents and determine whether our current software systems can support those requirements.

    Next, we can start with a small-scale pilot project to test the integration of AI agents in a controlled environment. This allows us to monitor performance and identify any integration issues early on. Based on feedback from the pilot, we can fine-tune the integration and gradually scale it to cover broader industrial processes.

    Once the AI agents are successfully integrated into existing systems, we also need to continuously monitor their performance and ensure compliance with the regulatory, social, and ethical aspects mentioned in the previous section.

    Siemens is one of the many companies that have successfully incorporated AI agents into their existing software systems. The company integrated AI agents into its TIA Portal, Siemens’ engineering platform for automating manufacturing processes. The AI agents access production data via a standardized API; the fetched data is then analyzed, and based on the analysis, the agents adjust the production parameters accordingly.
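    The general pattern behind such an integration can be sketched in a few lines: fetch production data over a standardized API, analyze it, and write adjusted parameters back. Everything in the sketch below (endpoints, field names, thresholds) is invented for illustration and is unrelated to Siemens’ actual interfaces.

```python
# Hypothetical sketch of a fetch-analyze-adjust loop behind such an integration.
# Endpoints, field names, and thresholds are invented for illustration only.
from typing import Optional

import requests

BASE_URL = "https://example.internal/api"  # hypothetical standardized API

def fetch_production_data(line_id: str) -> dict:
    """Fetch current metrics for a production line over the (hypothetical) API."""
    response = requests.get(f"{BASE_URL}/lines/{line_id}/metrics", timeout=10)
    response.raise_for_status()
    return response.json()

def decide_adjustment(metrics: dict) -> Optional[dict]:
    """Toy analysis: slow the line down if the defect rate exceeds a threshold."""
    if metrics.get("defect_rate", 0.0) > 0.02:
        return {"speed_factor": 0.9}
    return None

def run_once(line_id: str) -> None:
    """One pass of the loop: fetch data, analyze it, and write parameters back."""
    metrics = fetch_production_data(line_id)
    adjustment = decide_adjustment(metrics)
    if adjustment is not None:
        requests.post(f"{BASE_URL}/lines/{line_id}/parameters", json=adjustment, timeout=10)

# run_once("line-7")  # would poll the hypothetical API once and react
```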

    Reliability, Explainability, and Security

    AI agents, just like any other AI-based systems, make autonomous decisions probabilistically, based on the real-time data provided to them. This means that AI agents are not immune to making inappropriate decisions that could potentially breach data protection laws. They also don’t have a conscience the way humans do, which makes them prone to generating harmful or inappropriate content in response to malicious inputs such as prompt injection and jailbreaking attempts.

    Therefore, it’s important to implement guardrails as safety measures for AI agents. In a nutshell, guardrails consist of a set of rules designed to ensure that AI agents behave appropriately. These rules can include legal regulations, such as the EU AI Act, company internal policies, ethical guidelines, and more. There are several ways to implement guardrails on AI agents, such as:

    • Implementing content filtering: Use machine learning models or rule-based approaches to detect harmful content in user requests. If a request is flagged as harmful, it should not be processed by the AI agent.
    • Defining a clear set of rules: Establish explicit guidelines that define the limits and scope of an AI agent’s functions to ensure it doesn’t operate beyond its intended purpose. For example, a customer support chatbot should only provide responses based on the internal context/documents available to it.
    • Creating a continuous monitoring system and fallback mechanism: Implement a human-in-the-loop approach, where humans continuously monitor AI agents’ performance and intervene when necessary. A fallback mechanism also allows AI agents to redirect decisions to humans when handling high-stakes requests. As an example, if an AI-powered fraud detection system detects a potentially fraudulent transaction with low confidence, it should escalate the case to a human analyst for manual review before blocking the transaction. (A minimal sketch combining content filtering with such a fallback follows this list.)
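    Putting the first and third measures together, here is a minimal sketch of a rule-based content filter combined with a confidence-based fallback to a human reviewer. The keyword patterns and the threshold are illustrative assumptions, not a complete safety solution.

```python
# Minimal sketch of two guardrails: a rule-based content filter and a
# confidence-based fallback to a human reviewer. Patterns and threshold
# are illustrative assumptions, not a complete safety solution.
BLOCKED_PATTERNS = ["ignore previous instructions", "reveal your system prompt"]
ESCALATION_THRESHOLD = 0.8  # below this confidence, defer to a human

def passes_content_filter(user_request: str) -> bool:
    """Reject requests that match known prompt-injection patterns."""
    lowered = user_request.lower()
    return not any(pattern in lowered for pattern in BLOCKED_PATTERNS)

def handle_request(user_request: str, agent_decision: str, confidence: float) -> str:
    """Apply the guardrails before letting the agent's decision take effect."""
    if not passes_content_filter(user_request):
        return "Request rejected by content filter."
    if confidence < ESCALATION_THRESHOLD:
        return "Escalated to a human analyst for manual review."
    return agent_decision

# A low-confidence fraud flag is routed to a human instead of being blocked automatically.
print(handle_request("Is transaction T-991 fraudulent?", "Block transaction T-991.", 0.55))
```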

    Conclusion

    AI agents are proving to be invaluable tools in industrial software to improve efficiency, productivity, and profitability. This is because AI agents offer autonomous decision-making capabilities that reduce the need for human intervention.

    However, companies should carefully consider several factors before integrating AI agents into their existing systems, such as interoperability and regulatory, social, and ethical requirements. Ensuring the reliability and security of AI agents is equally essential; this can be achieved by implementing guardrails and continuous monitoring to prevent undesirable outcomes.

    Author

    [at] Editorial Team

    With extensive expertise in technology and science, our team of authors presents complex topics in a clear and understandable way. In their free time, they devote themselves to creative projects, explore new fields of knowledge and draw inspiration from research and culture.

