News

BMC Introduces Generative AI to Mainframes

BMC is enhancing mainframe management by introducing generative artificial intelligence (AI) assistants. One of their latest tools, available in beta, helps explain code functionality.

John McKenny, the senior vice president and general manager for Intelligent Z Optimization and Transformation at BMC, announced the launch of BMC Automated Mainframe Intelligence (AMI) DevX Code Insights. This tool, accessible via a chat interface, assists in debugging code written in various languages, understanding system processes, and making more informed decisions.

BMC is training large language models (LLMs) for this purpose. AMI DevX Code Insights is just one of several AI agents BMC plans to offer through a unified console. These LLMs might be developed by BMC or based on third-party platforms, and organizations can even use their own custom-built LLMs.

BMC is also inviting organizations to join a Design Program, granting access to new generative AI features as they develop. These AI agents will act like subject matter experts (SMEs) for specific tasks, offering more than just prompt-based question-answering. They will provide insights and guidance to streamline workflows.

Generative AI is a key part of BMC’s long-term strategy to simplify mainframe management over the next three decades. For example, developers will soon be able to use a service catalog to provision what they need themselves, reducing their reliance on IT operations teams.

The ultimate aim is to make mainframes as easy to manage as any distributed computing platform, requiring fewer specialized skills. While BMC is not alone in this endeavor, the rise of generative AI will significantly speed up the process.

This advancement is particularly important as more AI models are deployed on mainframe platforms, which already house vast amounts of data. It's generally more efficient to bring AI to existing data rather than moving data to new platforms.

IT teams may also need to reconsider which workloads run on which platforms. Although not all organizations use mainframes, those that do can lower overall IT costs by consolidating more workloads on their mainframes, thanks to specialized mainframe licenses.

In summary, as AI simplifies the management of workloads wherever they run, the skills and tooling IT teams rely on will likely converge across platforms over time.

OpenAI Launches SearchGPT: What It Can Do and How to Access It

OpenAI, known for its groundbreaking ChatGPT, which debuted in November 2022, is now venturing into the search engine arena with the launch of SearchGPT. This new AI-powered search tool is designed to provide real-time information and is currently available as a prototype.

What is SearchGPT?

SearchGPT allows users to enter queries like any typical search engine, but it stands out by delivering conversational responses that include up-to-date information sourced from the web. This approach offers a more interactive experience compared to traditional search engines.

Features of SearchGPT

Similar to the "Browse" feature in ChatGPT, SearchGPT includes links to the original sources of its information, allowing users to easily verify facts and delve deeper into topics. If users prefer a more traditional search results layout, they can click the "link" icon on the left sidebar to see a list of relevant webpages alongside the conversational response.

One of the key features of SearchGPT is its ability to handle follow-up questions, making it easier for users to refine their search without starting over. This prototype is currently being tested by around 10,000 users and publishers, with OpenAI gathering feedback to improve the service. Interested users can join a waitlist to try it out.

Support for Publishers

To address concerns that AI search engines might reduce traffic to publisher websites, OpenAI emphasizes that SearchGPT is designed to promote proper attribution and linking to original sources. Publishers can also control how their content appears in SearchGPT and opt out of having their content used to train OpenAI's models while still appearing in search results.

The Future of SearchGPT and ChatGPT

OpenAI plans to integrate the best features of SearchGPT into ChatGPT, enhancing the chatbot's capabilities by combining conversational responses with search functionality. This could provide a compelling alternative to traditional search engines like Google, which currently holds a dominant 91% market share according to StatCounter.

Other companies are also bringing generative AI to search, including Microsoft with Copilot in Bing and Perplexity with its AI-powered answer engine. While these efforts have gained traction (Bing has reached 140 million daily active users, and Perplexity has been valued at $1 billion), they have not yet posed a significant challenge to Google's dominance.

Google, meanwhile, continues to innovate in response to the growing interest in AI. The company introduced its Search Generative Experience (SGE) at Google I/O 2023, and expanded the use of AI-generated overviews in 2024, though it has had to adjust these features based on user feedback.

For now, OpenAI's SearchGPT is a promising addition to the evolving landscape of AI and search technology, offering a new way to access and interact with information online.

DevOps for Machine Learning and Artificial Intelligence

In today's tech world, DevOps is known for its ability to streamline development and operations. However, when it comes to machine learning (ML) and artificial intelligence (AI), traditional DevOps practices encounter unique challenges. Enter MLOps—a specialized approach that bridges the gap between data science, operations, and innovative AI applications. MLOps helps organizations efficiently develop, deploy, and manage ML and AI models, seamlessly integrating data-driven intelligence into their workflows.

Challenges in ML and AI Operations

Developing and deploying ML and AI models brings complexities that challenge traditional DevOps methods:

  1. Data Pipeline Complexity: ML and AI require intricate data preprocessing and management, making data pipelines critical yet challenging to handle.
  2. Model Versioning: Keeping track of multiple versions of models, their dependencies, and performance over time is essential for reproducibility and maintaining AI projects.
  3. Environment Consistency: Ensuring that development, testing, and production environments are consistent is crucial to prevent discrepancies in model behavior.
  4. Scalability and Performance: Scaling ML and AI models to handle production workloads while maintaining performance, especially for resource-intensive models, can be challenging.
  5. Monitoring and Ethical Governance: Real-time monitoring of model performance is vital. Ethical considerations, such as preventing misuse of AI-generated content, are also paramount.

The Role of MLOps in ML and AI

MLOps integrates ML systems into the broader DevOps workflow, uniting data science and operations teams to streamline the ML lifecycle:

  1. Collaboration Across Disciplines: AI projects often involve diverse teams, including data scientists, developers, and AI specialists. MLOps promotes seamless collaboration among these roles.
  2. Advanced Data Handling: AI works with various data types, including structured data, unstructured text, images, and multimedia. MLOps ensures these diverse data types are managed, of high quality, and readily available.
  3. Version Control: By applying version control practices similar to traditional DevOps, MLOps helps manage and track changes to code, data, and model artifacts.
  4. Continuous Integration and Deployment: CI/CD principles extend to AI, enabling automated testing, validation, and deployment of models.
  5. Automated Pipelines: Central to MLOps are automated ML pipelines, which allow organizations to automate data preprocessing, model training, evaluation, and deployment.
  6. Containerization and Orchestration: Tools like Docker and Kubernetes are used to package and deploy ML models consistently across environments.
  7. Explainable AI (XAI): MLOps incorporates XAI techniques to ensure transparency and interpretability of AI-driven decisions.
  8. Monitoring and Observability: Robust monitoring and observability solutions ensure ML models perform as expected in production, aiding in debugging and optimization.
  9. Governance and Compliance: MLOps emphasizes governance practices, ensuring ML models meet regulatory requirements and adhere to ethical standards.
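The automated-pipeline idea from the list above can be sketched in a few lines of plain Python. This is a minimal illustration of chaining preprocess, train, and evaluate stages through a shared context; the stage names and context keys are invented for this sketch and are not any particular MLOps framework's API.

```python
# Minimal sketch of an automated ML pipeline: each stage is a named
# function that transforms a shared context dict, mirroring the
# preprocess -> train -> evaluate flow described above.

def preprocess(ctx):
    # Scale raw feature values into [0, 1] so training is well-conditioned.
    xs = ctx["raw_x"]
    lo, hi = min(xs), max(xs)
    ctx["x"] = [(v - lo) / (hi - lo) for v in xs]
    return ctx

def train(ctx):
    # Closed-form least squares for y = a*x + b on a single feature.
    x, y = ctx["x"], ctx["y"]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    ctx["model"] = (a, my - a * mx)
    return ctx

def evaluate(ctx):
    a, b = ctx["model"]
    preds = [a * xi + b for xi in ctx["x"]]
    ctx["mse"] = sum((p - yi) ** 2 for p, yi in zip(preds, ctx["y"])) / len(preds)
    return ctx

def run_pipeline(stages, ctx):
    for name, stage in stages:
        ctx = stage(ctx)  # a real pipeline would also log and version each stage's output
    return ctx

ctx = run_pipeline(
    [("preprocess", preprocess), ("train", train), ("evaluate", evaluate)],
    {"raw_x": [10, 20, 30, 40], "y": [1.0, 2.0, 3.0, 4.0]},
)
```

In a production MLOps setup, each stage would additionally be versioned, containerized, and monitored, as items 3, 6, and 8 above describe.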

Benefits of MLOps for ML and AI

Adopting MLOps in ML and AI projects offers numerous benefits:

  1. Accelerated AI Projects: MLOps streamlines the development and deployment of AI models, reducing time-to-value for AI initiatives.
  2. Enhanced Collaboration: MLOps fosters better collaboration between data scientists, developers, and AI specialists, leading to more efficient project delivery.
  3. Improved Reproducibility: MLOps ensures that AI experiments are well-documented and reproducible, aiding in model auditing and compliance.
  4. Scalability: AI models can easily scale to handle varying workloads while maintaining performance and reliability.
  5. Ethical AI: MLOps prioritizes ethical AI usage, minimizing the risk of harmful or inappropriate AI-generated content.

Future Trends

The future of DevOps in AI and ML promises greater integration of machine learning, automation, and transparency. MLOps will become a standard practice, while AI-driven DevOps tools will optimize workflows, enhance security, and predict system behavior.

The Future of QA: Exploring Emerging Trends and Innovations in AI-Driven Software Testing

The IT industry is rapidly growing, and companies are under immense pressure to deliver high-quality software. Digital products, made up of millions of lines of code, are crucial for success. Testing enterprise applications is challenging due to the unique workflows of users, company regulations, and third-party systems influencing each application's design.

A recent Gartner report highlights the significant value of AI-integrated software testing. It boosts productivity by creating and managing test assets and provides early feedback on the quality of new releases to testing teams.

The increasing complexity of modern applications and the reliance on manual testing affect overall developer productivity, product reliability, stability, compliance, and operational efficiency. AI-augmented software testing solutions help teams gain confidence in their release candidates, enabling informed product releases.

The Evolution of Quality Assurance in the AI Era

Software development is dynamic, driven by technological advancements and customer demands for better solutions. Quality Assurance (QA) is crucial in ensuring that software products meet specific quality and performance standards. AI has recently transformed QA, improving efficiency, effectiveness, and speed, and it is expected to become standard in testing within a few years. Neural networks, a machine learning technique, are used in automated QA testing to generate test cases and detect bugs automatically, while natural language processing (NLP) supports requirements analysis.

AI in QA testing improves test coverage and accelerates issue detection. Combining AI and machine learning (ML) in testing enhances automation, improving the efficiency and accuracy of software testing processes. As organizations adopt AI in their QA, software engineering teams will benefit from integrating development environments (IDEs), DevOps platforms, and AI services like large language models (LLMs).

Automated Test Generation and Execution

AI creates test scenarios based on predefined criteria and patterns learned from previous runs. Intelligent automated scripts adapt to application changes, reducing the need for manual updates to tests that would otherwise become obsolete as the application evolves. For instance, if a component on a site is moved, self-healing tests identify its new location and continue running, significantly reducing cross-referencing time and increasing QA productivity.
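The self-healing behavior described above can be sketched in plain Python. The selector strings and the dict-as-page model here are hypothetical stand-ins for a real UI-testing framework's locator API; the point is only the fallback-and-promote logic.

```python
class SelfHealingLocator:
    """Try the primary selector first; fall back to alternates and
    "heal" by promoting whichever selector actually matched."""

    def __init__(self, primary, fallbacks):
        self.selectors = [primary] + list(fallbacks)

    def locate(self, page):
        for sel in self.selectors:
            if sel in page:
                if sel != self.selectors[0]:
                    # Heal: the next run tries the working selector first.
                    self.selectors.remove(sel)
                    self.selectors.insert(0, sel)
                return page[sel]
        raise LookupError(f"no selector matched: {self.selectors}")

# The login button's id changed between releases, but the test keeps working.
locator = SelfHealingLocator("#login-btn", ["button[name=login]", "text=Log in"])
old_page = {"#login-btn": "<button>"}
new_page = {"button[name=login]": "<button>"}
found_old = locator.locate(old_page)
found_new = locator.locate(new_page)   # falls back, then promotes the match
```

A production tool would persist the healed selector list so future runs benefit, and would use richer signals (attributes, position, text) to decide that a fallback really is the same element.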

Enhancing Test Accuracy with Predictive Analytics

Predictive analytics is transforming QA by forecasting future issues and vulnerabilities. It allows QA teams to address problems when they are still manageable, rather than when defects become extensive and require significant effort to fix. Predictive analytics helps QA teams focus on critical areas by estimating the likelihood of failure, ensuring QA efforts are effectively allocated.
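One simple way to picture the failure-likelihood estimate mentioned above: score each module from its historical defect data, with Laplace smoothing so modules with little history are not treated as zero-risk, then rank modules to direct QA effort. The module names and counts below are made up for illustration.

```python
# Hedged sketch of predictive prioritization: estimate each module's
# defect likelihood from history, then rank riskiest-first.

def failure_likelihood(defects, builds, alpha=1, beta=2):
    """Smoothed estimate: (defects + alpha) / (builds + alpha + beta)."""
    return (defects + alpha) / (builds + alpha + beta)

history = {
    # module: (defects found in past builds, builds tested)
    "billing":   (9, 50),
    "auth":      (2, 50),
    "reporting": (0, 10),   # sparse history: smoothing keeps some residual risk
}

ranked = sorted(
    history,
    key=lambda m: failure_likelihood(*history[m]),
    reverse=True,
)
```

A real predictive-analytics system would fold in many more signals (code churn, complexity, ownership changes), but the output is the same kind of ranked risk estimate that lets QA teams focus on critical areas first.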

Anomaly Detection and Risk-Based Testing

AI-driven risk-based testing examines the most critical and defect-prone components of a system. By focusing on these essential parts, significant risks are more likely to be addressed and avoided, improving software quality and the efficacy of QA methods.
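The risk-based selection idea can be made concrete with a small greedy sketch: each test covers some components, each component carries a risk score, and tests are picked to cover the most remaining risk within a time budget. All names, costs, and risk numbers here are invented for illustration.

```python
# Illustrative risk-based test selection: greedily pick the test that
# covers the most still-uncovered risk until the budget is exhausted.

def select_tests(tests, risk, budget):
    """tests: list of (name, cost, components); returns chosen test names."""
    remaining = dict(risk)
    chosen = []
    while budget > 0:
        best, best_gain = None, 0.0
        for name, cost, comps in tests:
            if name in chosen or cost > budget:
                continue
            gain = sum(remaining.get(c, 0.0) for c in comps)
            if gain > best_gain:
                best, best_gain = (name, cost, comps), gain
        if best is None:
            break
        name, cost, comps = best
        chosen.append(name)
        budget -= cost
        for c in comps:
            remaining[c] = 0.0   # risk in these components is now covered
    return chosen

risk = {"payments": 0.9, "auth": 0.6, "search": 0.2}
tests = [
    ("test_checkout", 3, ["payments"]),
    ("test_login",    1, ["auth"]),
    ("test_browse",   2, ["search"]),
    ("test_smoke",    2, ["auth", "search"]),
]
picked = select_tests(tests, risk, budget=4)
```

An AI-driven tool would derive the risk scores automatically (from defect history, churn, and anomaly detection) rather than hard-coding them, but the selection principle is the same.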

Elevating Testing Quality with Generative AI

Generative AI (GenAI) shows great potential beyond simple test case generation and planning, enhancing overall testing quality and enabling complex testing scenarios. It improves efficiency, allowing testing teams to complete projects faster and take on additional tasks, thus increasing the company's value. GenAI enables QA teams to perform thorough quality checks on test cases and scripts, ensuring they are error-free and adhere to best practices. GenAI also develops and organizes complex data sets for realistic and robust experiments and prepares and executes advanced tests like stress and load testing. Leading tech companies like Facebook and Google’s DeepMind are already leveraging GenAI to improve bug detection, test coverage, and testing for machine learning systems.
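The test-data-generation idea can be illustrated with a deterministic sketch: expand a small schema into structured records for load or stress tests. A GenAI-backed tool would produce far richer, more realistic data; plain Python stands in here so the flow stays reproducible, and the field names are hypothetical.

```python
# Deterministic stand-in for GenAI test-data generation: expand a schema
# of field generators into n structured records, seeded for repeatability.
import random

SCHEMA = {
    "user_id":   lambda rng: rng.randrange(1000, 9999),
    "country":   lambda rng: rng.choice(["US", "DE", "JP", "BR"]),
    "cart_size": lambda rng: rng.randint(1, 12),
}

def generate_records(schema, n, seed=0):
    rng = random.Random(seed)   # fixed seed: every run produces the same data set
    return [{field: gen(rng) for field, gen in schema.items()} for _ in range(n)]

records = generate_records(SCHEMA, n=100)
```

Seeding matters for QA: a load test that fails must be rerunnable on exactly the same data, which is also why generated data sets should be versioned like any other test asset.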

Conclusion

Gartner predicts that by 2027, 80% of enterprises will integrate AI-supported testing solutions into their software development process, up from 15% in 2023. As AI continues to develop, we can expect significant breakthroughs in QA, revolutionizing software testing and ensuring the delivery of high-quality code.

Automated test generation and execution, predictive analytics, anomaly detection, and risk-based testing are critical advancements in quality assurance. By embracing these innovative trends, organizations can ensure their QA practices keep pace with increasingly complex software.