BMC is enhancing mainframe management by introducing generative artificial intelligence (AI) assistants. One of its latest tools, available in beta, helps explain code functionality.
John McKenny, the senior vice president and general manager for Intelligent Z Optimization and Transformation at BMC, announced the launch of BMC Automated Mainframe Intelligence (AMI) DevX Code Insights. This tool, accessible via a chat interface, assists in debugging code written in various languages, understanding system processes, and making more informed decisions.
BMC is training large language models (LLMs) for this purpose. The AMI DevX Code Insights is just one of several AI agents BMC plans to offer through a unified console. These LLMs might be developed by BMC or based on third-party platforms, and organizations can even use their custom-built LLMs.
BMC is also inviting organizations to join a Design Program, granting access to new generative AI features as they develop. These AI agents will act like subject matter experts (SMEs) for specific tasks, offering more than just prompt-based question-answering. They will provide insights and guidance to streamline workflows.
Generative AI is a key part of BMC’s long-term strategy to simplify mainframe management over the next three decades. For example, developers will soon be able to use a service catalog to meet their requirements independently, reducing their reliance on IT operations teams.
The ultimate aim is to make mainframes as easy to manage as any distributed computing platform, requiring fewer specialized skills. While BMC is not alone in this endeavor, the rise of generative AI will significantly speed up the process.
This advancement is particularly important as more AI models are deployed on mainframe platforms, which already house vast amounts of data. It's generally more efficient to bring AI to existing data rather than moving data to new platforms.
IT teams may also need to reconsider which workloads run on which platforms. Although not all organizations use mainframes, those that do can lower overall IT costs by consolidating more workloads on their mainframes, thanks to specialized mainframe licenses.
In summary, as AI simplifies workload management regardless of location, IT teams will likely become more uniform over time.
OpenAI, known for its groundbreaking ChatGPT, which debuted in November 2022, is now venturing into the search engine arena with the launch of SearchGPT. This new AI-powered search tool is designed to provide real-time information and is currently available as a prototype.
What is SearchGPT?
SearchGPT allows users to enter queries like any typical search engine, but it stands out by delivering conversational responses that include up-to-date information sourced from the web. This approach offers a more interactive experience compared to traditional search engines.
Features of SearchGPT
Similar to the "Browse" feature in ChatGPT, SearchGPT includes links to the original sources of its information, allowing users to easily verify facts and delve deeper into topics. If users prefer a more traditional search results layout, they can click the "link" icon on the left sidebar to see a list of relevant webpages alongside the conversational response.
"We're testing SearchGPT, a temporary prototype of new AI search features that give you fast and timely answers with clear and relevant sources. We're launching with a small group of users for feedback and plan to integrate the experience into ChatGPT." — OpenAI (@OpenAI), July 25, 2024
One of the key features of SearchGPT is its ability to handle follow-up questions, making it easier for users to refine their search without starting over. This prototype is currently being tested by around 10,000 users and publishers, with OpenAI gathering feedback to improve the service. Interested users can join a waitlist to try it out.
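The follow-up behavior works because the conversation context is carried across turns. The sketch below illustrates that general pattern with a generic chat-completions call in Python; it is not SearchGPT itself (which has no public API), and the model name is a placeholder.

```python
# Sketch of how follow-up questions can refine a search without starting over:
# the full message history is sent with each new query, so the model keeps context.
# Generic illustration only; SearchGPT has no public API, and the model name
# below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = [{"role": "user", "content": "What were the major AI search launches in 2024?"}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# The follow-up inherits the earlier context, so "which of those" needs no restating.
messages.append({"role": "user", "content": "Which of those provide real-time web results?"})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```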
Support for Publishers
To address concerns that AI search engines might reduce traffic to publisher websites, OpenAI emphasizes that SearchGPT is designed to promote proper attribution and linking to original sources. Publishers can also control how their content appears in SearchGPT and opt out of having their content used to train OpenAI's models while still appearing in search results.
The Future of SearchGPT and ChatGPT
OpenAI plans to integrate the best features of SearchGPT into ChatGPT, enhancing the chatbot's capabilities by combining conversational responses with search functionality. This could provide a compelling alternative to traditional search engines like Google, which currently holds a dominant 91% market share according to StatCounter.
Other companies are also exploring the integration of generative AI into search, including Microsoft, with its Copilot in Bing, and Perplexity, an AI-powered search engine. While these efforts have gained traction, with Bing reaching 140 million daily active users and Perplexity valued at $1 billion, they have not yet posed a significant challenge to Google's dominance.
Google, meanwhile, continues to innovate in response to the growing interest in AI. The company introduced its Search Generative Experience (SGE) at Google I/O 2023, and expanded the use of AI-generated overviews in 2024, though it has had to adjust these features based on user feedback.
For now, OpenAI's SearchGPT is a promising addition to the evolving landscape of AI and search technology, offering a new way to access and interact with information online.
In today's tech world, DevOps is known for its ability to streamline development and operations. However, when it comes to machine learning (ML) and artificial intelligence (AI), traditional DevOps practices encounter unique challenges. Enter MLOps—a specialized approach that bridges the gap between data science, operations, and innovative AI applications. MLOps helps organizations efficiently develop, deploy, and manage ML and AI models, seamlessly integrating data-driven intelligence into their workflows.
Developing and deploying ML and AI models introduce complexities that traditional DevOps methods were not built for, including versioning of data as well as code, tracking experiments, monitoring models for drift, and keeping results reproducible.

MLOps integrates ML systems into the broader DevOps workflow, uniting data science and operations teams to streamline the ML lifecycle from data preparation and training through deployment and ongoing monitoring.

Adopting MLOps in ML and AI projects offers numerous benefits, including faster iteration, reproducible experiments, and more reliable, auditable deployments.
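To ground this in something concrete, the sketch below shows experiment tracking, one common MLOps building block: parameters, metrics, and the trained model are logged so every result stays reproducible and auditable. MLflow and the toy dataset are illustrative choices rather than tools named in this article.

```python
# Sketch of experiment tracking, a common MLOps practice: parameters, metrics,
# and the trained model are logged so results stay reproducible and auditable.
# MLflow and the toy dataset are illustrative choices only.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="baseline-rf"):
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later deployment
```

Logging the model alongside its parameters and metrics is what lets a deployment later be traced back to the exact run that produced it.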
The future of DevOps in AI and ML promises greater integration of machine learning, automation, and transparency. MLOps will become a standard practice, while AI-driven DevOps tools will optimize workflows, enhance security, and predict system behavior.
The IT industry is rapidly growing, and companies are under immense pressure to deliver high-quality software. Digital products, made up of millions of lines of code, are crucial for success. Testing enterprise applications is challenging due to the unique workflows of users, company regulations, and third-party systems influencing each application's design.
A recent Gartner report highlights the significant value of AI-integrated software testing. It boosts productivity by creating and managing test assets and gives testing teams early feedback on the quality of new releases.
The increasing complexity of modern applications and the reliance on manual testing affect overall developer productivity, product reliability, stability, compliance, and operational efficiency. AI-augmented software testing solutions help teams gain confidence in their release candidates, enabling informed product releases.
Software development is dynamic, driven by technological advancements and customer demands for better solutions. Quality Assurance (QA) is crucial in ensuring that software products meet specific quality and performance standards. AI has recently transformed QA, enhancing efficiency, effectiveness, and speed. It's expected that AI will become standard in testing within a few years. Neural Networks, a machine learning technique, are used in automated QA testing to generate test cases and detect bugs automatically. AI also uses natural language processing (NLP) for requirements analysis.
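To give a flavor of NLP-aided requirements analysis, the minimal sketch below flags vague wording that tends to make a requirement hard to test. The word list and sample requirements are invented for illustration; production tools rely on trained language models rather than simple keyword matching.

```python
# Tiny sketch of NLP-style requirements analysis: flag vague wording that tends
# to make a requirement untestable. Real tools use trained language models;
# this keyword pass and the sample requirements are purely illustrative.
AMBIGUOUS_TERMS = {"fast", "user-friendly", "should", "flexible", "as appropriate"}

requirements = [
    "The system should respond fast under normal load.",
    "Export completes within 5 seconds for files up to 100 MB.",
    "The UI must be user-friendly and flexible.",
]

for req in requirements:
    hits = [term for term in AMBIGUOUS_TERMS if term in req.lower()]
    verdict = f"ambiguous ({', '.join(hits)})" if hits else "looks testable"
    print(f"- {req}\n  -> {verdict}")
```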
AI in QA testing improves test coverage and accelerates issue detection. Combining AI and machine learning (ML) in testing enhances automation, improving the efficiency and accuracy of software testing processes. As organizations adopt AI in their QA, software engineering teams will benefit from connecting integrated development environments (IDEs), DevOps platforms, and AI services such as large language models (LLMs).
AI creates test scenarios based on preset criteria and accumulated experience. Intelligent automated scripts adapt to program changes, reducing the need for manual updates to tests that would otherwise become obsolete as applications evolve. For instance, if a component on a site is moved, self-healing tests identify its new location and continue running, significantly reducing cross-referencing time and increasing QA productivity.
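A minimal version of that self-healing idea is a locator that falls back to alternative selectors when the primary one stops matching. The Selenium-based sketch below is illustrative only; the selectors and URL are hypothetical, and commercial tools use far richer heuristics.

```python
# Sketch of a "self-healing" locator: if the primary selector no longer matches
# (for example, a component was moved or renamed), fall back to alternatives
# instead of failing the test outright. Selectors and URL are hypothetical;
# real tools use much richer heuristics (attributes, position, ML models).
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_fallbacks(driver, locators):
    """Try each (By, value) pair in order and return the first element found."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # this selector "broke"; try the next candidate
    raise NoSuchElementException(f"No locator matched: {locators}")

driver = webdriver.Chrome()
driver.get("https://example.com/checkout")  # hypothetical page

submit_button = find_with_fallbacks(driver, [
    (By.ID, "submit-order"),                            # original locator
    (By.CSS_SELECTOR, "button[data-test='submit']"),    # fallback after a UI change
    (By.XPATH, "//button[contains(., 'Place order')]"), # last-resort text match
])
submit_button.click()
driver.quit()
```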
Predictive analytics is transforming QA by forecasting future issues and vulnerabilities. It allows QA teams to address problems when they are still manageable, rather than when defects become extensive and require significant effort to fix. Predictive analytics helps QA teams focus on critical areas by estimating the likelihood of failure, ensuring QA efforts are effectively allocated.
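As a rough illustration of that predictive side, historical change and defect data can train a classifier that scores each module's likelihood of failure. The features and sample numbers below are invented for the sketch; in practice the label would come from mapping past defects back to the releases and modules they originated in.

```python
# Sketch of predictive analytics for QA: train a classifier on historical
# change/defect data to estimate which modules are most likely to fail next.
# The features and sample data are invented purely for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

history = pd.DataFrame({
    "lines_changed":            [120, 15, 300, 40, 220, 10],
    "recent_commits":           [9,   1,  14,  3,  11,  2],
    "past_defects":             [4,   0,  7,   1,  5,   0],
    "had_defect_next_release":  [1,   0,  1,   0,  1,   0],  # label from past releases
})

model = RandomForestClassifier(random_state=0).fit(
    history.drop(columns="had_defect_next_release"),
    history["had_defect_next_release"],
)

# Score the modules in the upcoming release so QA can focus effort where risk is highest.
upcoming = pd.DataFrame({
    "lines_changed":  [250, 20],
    "recent_commits": [12, 2],
    "past_defects":   [6, 0],
}, index=["billing", "settings"])
print(upcoming.assign(failure_probability=model.predict_proba(upcoming)[:, 1]))
```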
AI-driven risk-based testing examines the most critical and defect-prone components of a system. By focusing on these essential parts, significant risks are more likely to be addressed and avoided, improving software quality and the efficacy of QA methods.
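One simple way to operationalize risk-based testing is to rank components by a risk score, for example estimated failure likelihood multiplied by business impact, and test the riskiest ones first. The components and numbers below are hypothetical.

```python
# Sketch of risk-based test prioritization: rank components by a simple
# risk score (estimated failure likelihood x business impact) and run the
# riskiest tests first. Component names and numbers are hypothetical.
components = [
    {"name": "payment-service",  "failure_likelihood": 0.35, "impact": 9},
    {"name": "search-ui",        "failure_likelihood": 0.20, "impact": 5},
    {"name": "profile-settings", "failure_likelihood": 0.05, "impact": 3},
    {"name": "order-history",    "failure_likelihood": 0.15, "impact": 7},
]

for c in components:
    c["risk"] = c["failure_likelihood"] * c["impact"]

for c in sorted(components, key=lambda c: c["risk"], reverse=True):
    print(f"{c['name']:<18} risk={c['risk']:.2f}")
```

The likelihood inputs could come from a defect-prediction model like the one sketched above, while impact is usually a business judgment.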
Generative AI (GenAI) shows great potential beyond simple test case generation and planning, enhancing overall testing quality and enabling complex testing scenarios. It improves efficiency, allowing testing teams to complete projects faster and take on additional tasks, thus increasing the company's value. GenAI enables QA teams to perform thorough quality checks on test cases and scripts, ensuring they are error-free and adhere to best practices. GenAI also develops and organizes complex data sets for realistic and robust experiments and prepares and executes advanced tests like stress and load testing. Leading tech companies like Facebook and Google’s DeepMind are already leveraging GenAI to improve bug detection, test coverage, and testing for machine learning systems.
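As a hedged sketch of that kind of GenAI assistance, the snippet below asks a generic LLM to propose edge-case unit tests for a small function. It does not represent any specific vendor's tooling, the model name is a placeholder, and generated tests still need human review before they are trusted.

```python
# Sketch of GenAI-assisted test generation: ask an LLM to propose edge-case
# unit tests for a function. Generic illustration; the model name is a
# placeholder and the output should always be reviewed before use.
from openai import OpenAI

client = OpenAI()

function_source = '''
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{
        "role": "user",
        "content": "Write pytest unit tests covering edge cases (zero, negative, "
                   f"and over-100-percent discounts) for this function:\n{function_source}",
    }],
)
print(response.choices[0].message.content)
```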
Gartner predicts that by 2027, 80% of enterprises will integrate AI-supported testing solutions into their software development process, up from 15% in 2023. As AI continues to develop, we can expect significant breakthroughs in QA, revolutionizing software testing and ensuring the delivery of high-quality code.
Automated test generation and execution, predictive analytics, anomaly detection, and risk-based testing are critical advancements in quality assurance. By embracing these innovative trends, organizations can ensure the delivery of reliable, high-quality software.