Meta AI on WhatsApp is set for a major upgrade that would turn it into a personalized assistant tailored to user preferences. The update is expected to be powered by a Chat Memory feature, currently under development, which aims to deliver more relevant and customized responses based on each user's interactions with Meta AI.
According to a report from WABetaInfo, the Chat Memory feature will enable Meta AI to retain certain details from conversations, such as personal interests, dietary preferences, birthdays, allergies, and even your preferred conversation style. With this stored information, Meta AI can offer personalized responses that align with the user’s lifestyle and preferences.
For example, if you ask Meta AI for food suggestions, it could use your saved information to automatically filter out dishes you dislike or are allergic to.
A screenshot shared by WABetaInfo suggests that Meta AI will come with a disclaimer stating that it "automatically remembers certain parts of your chat to provide more relevant responses." Users will also have the option to instruct Meta AI to remember specific details using the "remember this" command.
For those concerned about privacy, WhatsApp will give users full control over what information Meta AI retains. You will have the option to update or delete this information at any time, ensuring your data remains in your control.
WABetaInfo discovered the Chat Memory feature in WhatsApp beta for Android version 2.24.22.9. Although Meta has made no official announcement, WhatsApp is expected to roll out the update to a broader audience soon. In our own independent testing of the WhatsApp beta, however, the feature was not yet available.
---------------------------------------------------------------------------------------------------
BMC Software announced a new generative AI assistant at its Connect 2024 event, designed to make it easier to add AI capabilities inside mainframe environments.
The assistant is part of BMC's Automated Mainframe Intelligence (AMI) framework, which automates routine mainframe tasks. BMC AMI Assistant works alongside tools such as BMC AMI DevX Code Insights, which explains how a piece of mainframe code is structured and how it works.
In addition, BMC AMI Assistant will be integrated with BMC AMI Ops Insights, where the beta tool uses large language models (LLMs) to help optimize mainframe applications.
"These systems become more accessible for the new generation of IT specialists as the number of AI tools grows," said Priya Doty, vice president of Industry Solutions Marketing at BMC, adding that the assistant will preserve crucial knowledge about mainframe systems, especially as experienced professionals retire.
According to Steven Dickens, Chief Technology Advisor at The Futurum Group, many mainframe applications are built on old, poorly documented code, and the introduction of AI makes updating and modernizing such essential systems much easier.
BMC AMI DevX Code Insights is one of several AI-powered tools in the unified consoles BMC aims to provide. The underlying AI models may be developed in-house by BMC or built on top of third-party platforms, and they can also incorporate an organization's own LLMs if needed.
Earlier this year, BMC invited organizations to join its Design Program, which offers early access to new generative AI features as they are rolled out. Ultimately, managing a mainframe could become much like managing any other distributed computing platform, eliminating much of the need for specialty skills.
This shift in AI usage will in turn change IT team structures and roles, as cross-functional teams of developers become able to build and deploy applications across mainframes and other platforms.
Meanwhile, mainframe usage is growing as more Java and Python applications land on these platforms. Python in particular is likely to serve as the catalyst for new workloads, such as high-performance, real-time AI analysis of the transactions that mainframes already handle at unparalleled scale. Managing these increasingly diverse workloads will be a challenge going forward; decades after being dismissed as also-ran relics destined for obscurity, mainframes remain critical to the enterprise.
---------------------------------------------------------------------------------------------------
Microsoft today announced that Sebastien Bubeck, its vice president of generative AI research, is departing to take a comparable job at OpenAI, the AI startup behind ChatGPT.
Details about Bubeck's new position at OpenAI were not shared, but a Microsoft spokesperson confirmed his departure, stating, "Sebastien has decided to leave Microsoft to continue his efforts in the pursuit of Artificial General Intelligence (AGI)." The spokesperson also noted that the company looks forward to continued collaboration through his work at OpenAI, which is backed by Microsoft.
Bubeck did not respond to a request for comment. His move follows a series of high-profile departures from OpenAI over the past few months, among them former CTO Mira Murati, who left in September. According to OpenAI Chief Executive Officer Sam Altman, those personnel changes were unrelated to the company's restructuring efforts.
Bubeck's move marks another significant shift in the AI research landscape, as leading experts continue to migrate between the field's top organizations.
---------------------------------------------------------------------------------------------------
Meta’s new Llama 3.2 model, available in 11-billion and 90-billion parameter variants, can process images and text. It understands charts, captions images, and locates objects from natural language prompts.
Meta has launched its latest AI model, Llama 3.2, marking a significant step in the open-source AI space. The release introduces models with both image and text processing abilities, arriving two months after the company's last major AI update.
Llama 3.2 comes in several variants, including 11-billion and 90-billion parameter models that are designed for advanced visual tasks. These vision models can read charts, caption images, and identify objects using natural language commands. The larger, 90-billion parameter model is even more advanced, capable of extracting details from images to create precise captions. For developers looking for AI solutions that handle text, there are also smaller models with 1-billion and 3-billion parameters, designed to work efficiently on mobile devices and edge hardware.
These models are particularly useful for developers creating AI-powered applications, such as augmented reality apps that can understand videos in real-time, visual search engines, and document analysis tools. The text-only models are optimized for devices running on Qualcomm and MediaTek hardware, enabling features like summarizing conversations, scheduling meetings, and building personalized AI apps.
Although Meta is still catching up with competitors like Anthropic and OpenAI in the multimodal AI space, Llama 3.2 offers strong competition. Meta claims that Llama 3.2 performs better than Gemma and Phi 3.5-mini in tasks such as rewriting prompts, following instructions, and summarizing content.
This new release reinforces Meta’s commitment to pushing forward in AI development, providing versatile tools for developers to build next-gen AI applications.
----------------------------------------------------------------------------------------------------