Announcing the o1 model in Azure OpenAI Service, plus cool new fine-tuning advancements
The o1 model is coming soon to Azure OpenAI Service. It brings advanced capabilities and improvements that enable developers to apply reasoning to tasks such as inventory management, customer support inquiries, financial analysis and more.

Announcing the o1 model in Azure OpenAI Service

As we continue to push the boundaries of AI capabilities, we are also thrilled to announce several new fine-tuning features in Azure OpenAI Service. Learn more about o1-mini reinforcement fine-tuning (optimize model behavior in highly complex or dynamic environments), direct preference optimization (adjust model weights based on human preferences), prompt caching (reduce request latency and costs by reusing recently seen input tokens) and more!

Introducing New Fine-tuning Techniques and Capabilities in Azure OpenAI Service

AI-Powered Chat with Azure Communication Services and Azure OpenAI
Many applications offer chat with automated capabilities but lack the depth to fully understand and address user needs. What if a chat app could not only connect people but also improve conversations with AI insights? Imagine detecting customer sentiment, bringing in experts as needed, and supporting global customers with real-time language translation. These aren't hypothetical AI features, but ways you can enhance your chat apps using Azure Communication Services and Azure OpenAI today.

In this blog post, we guide you through a quickstart available on GitHub for you to clone and try on your own. We highlight key features and functions, making it easy to follow along. Learn how to upgrade basic chat functionality with AI that analyzes user sentiment, summarizes conversations, and translates messages in real time.

Natural Language Processing for Chat Messages

First, let's go through the key features of this project.

Chat Management: The Azure Communication Services Chat SDK enables you to manage chat threads and messages, including adding and removing participants as well as sending messages.
AI Integration: Use Azure OpenAI GPT models to perform:
- Sentiment Analysis: Determine whether user chat messages are positive, negative, or neutral.
- Summarization: Get a summary of a chat thread to understand the key points of a conversation.
- Translation: Translate messages into different languages.
RESTful endpoints: Easily integrate these AI capabilities and chat management through RESTful endpoints.
Event Handling (optional): Use Azure Event Grid to handle chat message events and trigger the AI processing.

The starter code for the quickstart is designed to get you up and running quickly. After entering your Azure Communication Services and Azure OpenAI credentials in the config file and running a few commands in your terminal, you can observe the features listed above in action. There are two main components in this example. The first is the ChatClient, which manages capturing and sending messages via a basic chat application built on Azure Communication Services. The second, OpenAIClient, enhances the chat application by transmitting messages to Azure OpenAI along with instructions for the desired types of AI analysis.

AI Analysis with OpenAIClient

Azure OpenAI can perform many kinds of AI analysis, but this quickstart focuses on summarization, sentiment analysis, and translation. To achieve this, we created three distinct system prompts, one for each type of analysis we want to perform on our chat messages. These system prompts serve as the instructions for how Azure OpenAI should process the user messages. To summarize a conversation, we hard-coded a system prompt that says, "Act like you are an agent specialized in generating summary of a chat conversation, you will be provided with a JSON list of messages of a conversation, generate a summary for the conversation based on the content message." Like the best LLM prompts, it's clear, specific, and provides context for the inputs it will get. The system prompts for translation and sentiment analysis follow a similar pattern. The quickstart provides the basic architecture that enables you to take the chat content and pass it to Azure OpenAI for analysis.

The Core Function: getChatCompletions

The getChatCompletions function is a pivotal part of the AI chat sample project. It processes user messages from the chat application, sends them to the Azure OpenAI service for analysis, and returns the AI-generated responses.
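To make this concrete, below is a minimal sketch of what such a helper can look like. It is an illustration rather than the quickstart's exact code: it assumes the @azure/openai JavaScript SDK (the 1.x API that exposes OpenAIClient and AzureKeyCredential) and hypothetical environment variable names for the endpoint, key, and deployment; only the summarization prompt is quoted from the sample.

```typescript
import { OpenAIClient, AzureKeyCredential } from "@azure/openai";

// Hypothetical environment variable names; the quickstart's config may differ.
const endpoint = process.env.AZURE_OPENAI_ENDPOINT ?? "";
const apiKey = process.env.AZURE_OPENAI_API_KEY ?? "";
const deploymentName = process.env.AZURE_OPENAI_DEPLOYMENT_NAME ?? "";

const openAiClient = new OpenAIClient(endpoint, new AzureKeyCredential(apiKey));

// System prompt for summarization, quoted from the article; the sentiment and
// translation prompts follow the same pattern.
const summarizePrompt =
  "Act like you are an agent specialized in generating summary of a chat conversation, " +
  "you will be provided with a JSON list of messages of a conversation, " +
  "generate a summary for the conversation based on the content message.";

// Send a system prompt plus the user's chat content to Azure OpenAI and
// return the model's reply.
export async function getChatCompletions(systemPrompt: string, userPrompt: string): Promise<string> {
  const result = await openAiClient.getChatCompletions(deploymentName, [
    { role: "system", content: systemPrompt },
    { role: "user", content: userPrompt },
  ]);

  const content = result.choices[0]?.message?.content ?? "";
  console.log(content);
  return content;
}

// Example: summarize a short conversation passed in as a JSON list of messages.
getChatCompletions(
  summarizePrompt,
  JSON.stringify([
    { sender: "Bob", message: "My order arrived damaged." },
    { sender: "Alice", message: "Sorry to hear that! I can ship a replacement today." },
  ])
).catch(console.error);
```

The same helper covers sentiment analysis and translation simply by swapping in a different system prompt.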
Here's a detailed breakdown of how it works:

Parameters

The getChatCompletions function takes in two required parameters:

systemPrompt: A string that provides instructions or context to the AI model. This helps guide Azure OpenAI to generate appropriate and relevant responses.
userPrompt: A string that contains the actual message from the user. This is what the AI model analyzes and responds to.

Deployment Name: The getChatCompletions function starts by retrieving the deployment name for the Azure OpenAI model from the environment variables.
Message Preparation: The function formats and prepares the messages to send to Azure OpenAI. This includes the system prompt with instructions for the AI model and the user prompt that contains the actual chat messages.
Sending to Azure OpenAI: The function sends these prepared messages to the Azure OpenAI service using the openAiClient's getChatCompletions method. This method interacts with the Azure OpenAI model to generate a response based on the provided prompts.
Processing the Response: The function receives the response from Azure OpenAI, extracts the AI-generated content, logs it, and returns it for further use.

Explore and Customize the Quickstart

The goal of the quickstart is to demonstrate how to connect a chat application and Azure OpenAI, then expand on the capabilities. To run this project locally, make sure you meet the prerequisites and follow the instructions in the GitHub repository. The system prompts and user messages are provided as samples for you to experiment with. The sample chat interaction is quite pleasant. Feel free to play around with the system prompts and change the sample messages between fictional Bob and Alice in client.ts to something more hostile and see how the analysis changes.

Real-time messages

For your own chat application, you should analyze messages in real time. This demo is designed to simulate that workflow for ease of setup, with messages sent through your local demo server. However, the GitHub repository for this quickstart project provides instructions for implementing this in your actual application. To analyze real-time messages, you can use Azure Event Grid to capture any messages sent to your Azure Communication Services resource along with the necessary chat data. From there, you trigger the function that calls Azure OpenAI with the appropriate context and system prompts for the desired analysis (a minimal sketch of such a handler appears after the Conclusion below). More information about setting up this workflow is available under the "optional" tags in the quickstart's README on GitHub.

Conclusion

Integrating Azure Communication Services with Azure OpenAI enables you to enhance your chat applications with AI analysis and insights. This guide helps you set up a demo that shows sentiment analysis, translation, and summarization, improving user interactions and engagement. To dive deeper into the code, check out the Natural Language Processing of Chat Messages repository, and build your own AI-powered chat application today!
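As noted in the Real-time messages section above, a production setup has Azure Event Grid push chat events to an endpoint you control, which then runs the same analysis. Below is a rough sketch of such a handler, not the quickstart's actual code: the Express route, port, sentiment prompt, and the import path for the getChatCompletions helper from the earlier sketch are assumptions, while the subscription validation handshake and the Microsoft.Communication.ChatMessageReceived event type follow the standard Azure Event Grid delivery pattern.

```typescript
import express from "express";
// The helper from the earlier sketch; this module path is hypothetical.
import { getChatCompletions } from "./openAiClient";

// Illustrative sentiment prompt, following the same pattern as the summarization prompt.
const sentimentPrompt =
  "Act like you are an agent specialized in sentiment analysis. Classify the chat message " +
  "you are given as positive, negative, or neutral, and reply with a single word.";

const app = express();
app.use(express.json());

// Webhook that Azure Event Grid calls with chat events raised by your
// Azure Communication Services resource.
app.post("/api/chat-events", async (req, res) => {
  for (const event of req.body ?? []) {
    // One-time endpoint validation handshake required by Event Grid.
    if (event.eventType === "Microsoft.EventGrid.SubscriptionValidationEvent") {
      res.json({ validationResponse: event.data.validationCode });
      return;
    }

    // Analyze each incoming chat message; the event data carries the message
    // body and thread details.
    if (event.eventType === "Microsoft.Communication.ChatMessageReceived") {
      const sentiment = await getChatCompletions(sentimentPrompt, event.data.messageBody);
      console.log(`Thread ${event.data.threadId}: sentiment is ${sentiment}`);
    }
  }
  res.sendStatus(200);
});

app.listen(3000, () => console.log("Listening for Event Grid chat events on port 3000"));
```

In a real deployment this endpoint would more likely run as an Azure Function or another hosted service than as a local Express server, but the event handling logic stays the same.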
Ignite Recap - What's new in ISV Success - AI benefits and more

At this year's Ignite, we announced new benefits in ISV Success designed to support software companies as they build new AI-powered apps and publish to the commercial marketplace. During this session, we also spoke with Mike Mason, Global Alliance Director at Varonis, who shared how Varonis has leaned into the commercial marketplace to grow its business. For additional details on eligibility for the new and expanded benefits, check out our previous post.

Technical guidance to help you get started

The team highlighted new opportunities for technical guidance focused on building AI-powered solutions, including new AI Envisioning events. AI Envisioning Days offer sessions for both business and technical teams that help participants identify generative AI use cases and then build with a proven development framework. These events kicked off in November and are held monthly across three time zones. To take advantage of these sessions, you can register here. In addition to these events, ISV Success now offers software companies 1:1 AI consults tailored to their specific app development needs.

Developer tools & resources to get to market faster

ISV Success participants can now use Azure credits toward GitHub Copilot to accelerate development. GitHub Copilot is the most widely adopted AI development tool built by GitHub and OpenAI, offering code completion and automatic programming to help with repetitive tasks. Additionally, ISVs with certified software designations can now access financial incentives through ISV Success and Marketplace Rewards. These incentives are available through the new Advanced Package and offer up to $150K for AI and analytics projects and up to $50K to migrate end customers to an ISV's Azure-based solution.

Richer benefits and accelerated rewards for AI projects

ISVs currently building AI apps and intending to publish a transactable offer on the marketplace now qualify for the ISV Success expanded package. This package provides $25K in Azure to offset development costs and 50 hours of 1:1 technical consults. Once published, these ISVs can unlock GTM benefits through Marketplace Rewards earlier, including customer propensity scoring and Azure sponsorship.

Best practices from Varonis' Mike Mason

During an interview on stage, Mike shared his advice for other software companies looking to grow their business on the Microsoft commercial marketplace.

Meet customers where they are
Varonis first moved to transact on the marketplace over two years ago for a customer that wanted to use their Microsoft Azure Consumption Commitment (MACC) to purchase Varonis' solution. Customers continue to indicate that they prefer the seamless procurement process and single invoice offered through marketplace transactions.

Bring channel partners along
Varonis is a channel-first business and has participated in the marketplace MPO launches across the US and UK. For ISVs looking to sell alongside their partners in the marketplace, Microsoft maintains an MPO list of eligible partners. If an ISV finds that their partner is not yet listed, they can email channelready@microsoft.com to get that partner added.

Internal alignment and enablement are key
Varonis has strategically brought along teams across the business – including Finance, Operations, Marketing and Sales – for a unified approach to the marketplace that has contributed to their success. As the Global Alliance Director, Mike travels to each region and shares marketplace results and success stories.
To watch the full "What's new in ISV Success – AI benefits for software companies and more" session from Ignite 2024, click here. To learn more about the benefits of ISV Success, check out the ISV Hub. Ready to enroll in ISV Success? Visit the sign-up form.

In case you missed it, recent Azure AI innovation announcements
In case you missed it, learn about recent AI announcements Microsoft made at Ignite 2024, including what's new in Azure AI Governance, the Azure AI Model Catalog, Azure AI Search, Fine-Tuning in Azure OpenAI Service, new partner models and more.

Azure AI Governance, Risk, & Compliance

AI reports will help organizations improve cross-functional observability, collaboration and governance when developing and deploying generative AI (GenAI) apps and fine-tuned models. The Azure AI Foundry SDK and Azure AI Foundry portal will make it easier for organizations to create impact assessments for their AI apps by helping developers assemble key project details, such as model cards, model versions, content safety filter configurations and evaluation metrics, into a unified AI report. These reports can be exported to PDF or SPDX formats, helping development teams demonstrate production readiness within governance, risk and compliance (GRC) workflows and facilitate easier, ongoing audits of apps in production. This update will be in private preview next month.

Risk and safety evaluations for image content will help users assess the frequency and severity of harmful content in their app's AI-generated outputs. Specifically, these evaluations will expand existing text-based evaluation capabilities in Azure AI to assess a broader set of interactions with GenAI, such as text inputs that yield image outputs, image and text inputs that yield text outputs, and images that contain text (i.e., memes) as inputs that yield text and/or image outputs. These evaluations will help organizations better understand potential risks and apply targeted mitigations, such as modifying multimodal content filters with Azure AI Content Safety, adjusting grounding data sources or updating their system message before deploying an app to production.

Additional resources:
Blog: Learn more about AI reports
Blog: Learn more about risk and safety evaluations

Azure AI Model Catalog

The Azure AI model catalog is adding the latest AI models from leading innovators, enabling organizations to choose the right model for the right use case. Models from NTT DATA (generally available) and Bria AI (in preview) help organizations bring generative AI capabilities to their apps, while industry-specific models will empower developers to pursue solutions specific to healthcare, agriculture, manufacturing and finance.

Additional resources:
Blog: Learn more about this news

Azure Essentials

Microsoft launched Azure Essentials to help customers improve the reliability, security and ongoing performance of their cloud and AI investments by providing a single place to access a comprehensive set of resources, including tooling, skilling, guidance, reference architectures and best practices. Azure Essentials makes it possible to adopt AI at scale while aligning to Trustworthy AI principles and provides organizations with a clear path to maximize the value of their AI investment.

The AI scenario within the Cloud Adoption Framework equips technical decision-makers with prescriptive guidance to help prepare organizations to deploy AI workloads in production. The Cloud Adoption Framework methodologies have been adapted to Responsible AI principles so customers can build an AI foundation that supports the design, governance and ongoing management of AI workloads. It helps users with everything from developing an adoption strategy to managing AI workloads in production.
The AI workload guidance within the Azure Well-Architected Framework supports architects in decision-making when designing their AI workloads. This new guidance allows AI architects to meet the functional and non-functional requirements for reliability, security, performance efficiency, operational excellence and cost optimization.

Additional resources:
Blog: Learn more about this news

Azure AI Search

Updates to Azure AI Search, in preview, will help developers deliver better AI apps with improved retrieval augmented generation (RAG) performance. Query rewriting, available in preview, and semantic ranker are now powered by new, upgraded language models that deliver better responses and improved app experiences. In addition, Azure AI Search will soon be integrated with GitHub Models, enabling developers to explore and build a RAG application using a free AI Search index, directly from the GitHub Marketplace.

Additional resources:
Blog: Learn more about this news

Fine-Tuning in Azure OpenAI Service

New fine-tuning options in Azure OpenAI Service will enable developers and data scientists to customize models for their business needs. This will include support for fine-tuning GPT-4o and GPT-4o mini on Provisioned and Global Standard deployments, in preview next month. Additionally, developers will be able to leverage an end-to-end distillation workflow using Evaluation, in preview, and Stored Completions, in preview next month, to fine-tune cost-effective models like GPT-4o mini with outputs from advanced models. Multimodal fine-tuning for GPT-4o with vision is now generally available.

Additional resources:
Blog: Learn more about this news

Partner Models

New partner-enabled, adapted AI models address industry-specific use cases to help organizations across industries transform and accelerate business outcomes. Through the Microsoft Cloud, Microsoft's industry-specific AI capabilities and a trusted ecosystem of experienced partners, these new adapted AI models will empower customers to use AI technology to address their most pressing needs. Partners leveraging the power of Microsoft's Phi family of small language models include:

Bayer, a global enterprise with core competencies in the life science fields of healthcare and agriculture, makes E.L.Y. Crop Protection available in the Azure AI model catalog, for use by agronomic entities and their partners to advance agronomic knowledge and crop protection label compliance. Agronomists can use the model to enhance farmers' decision-making processes, helping to drive more sustainable outcomes.

Cerence, a global industry leader in creating unique, moving experiences for the mobility world, is enhancing its in-vehicle digital assistant technology with fine-tuned small language models (SLMs) within the vehicle's hardware. The Cerence CaLLM Edge model, available in the Azure AI model catalog, can be used for in-car controls, such as adjusting air conditioning systems, and scenarios that involve limited or no cloud connectivity.

Rockwell Automation, a global leader in industrial automation and digital transformation, will provide industrial AI expertise via the Azure AI model catalog. The FT Optix Food & Beverage model brings the benefits of industry-specific capabilities to frontline workers in manufacturing, supporting asset troubleshooting in the food and beverage domain.
Saifr, a RegTech within Fidelity Investments' innovation incubator, Fidelity Labs, introduces four new models in the Azure AI model catalog, empowering financial institutions to better manage regulatory compliance of broker-dealer communications and investment adviser advertising. The Retail Marketing Compliance model can help ensure marketing materials adhere to industry regulations and standards, while the Risk Interpretation model identifies and helps users understand potential risks in marketing content. The Language Suggestion model provides language suggestions to enhance the compliance of marketing messages, and the Image Detection model assists users with analyzing and verifying the appropriateness of images used in marketing campaigns.

Siemens Digital Industries Software, which helps organizations of all sizes digitally transform using software, hardware and services from the Siemens Xcelerator business platform, is introducing a new copilot for NX X software. It leverages an adapted AI model that enables users to ask natural language questions, access detailed technical insights and streamline complex design tasks for faster and smarter product development.

Sight Machine, a leader in data-driven manufacturing and industrial AI, will release the Factory Namespace Manager to the Azure AI model catalog. The model helps manufacturers rename and integrate factory data with their corporate data systems, enabling them to analyze and optimize production alongside supply chain, sales, finance and other corporate functions.

Additional resources:
Blog: Learn more about this news

Expert Insights: AI PCs and your technology strategy with Microsoft, Intel, and Forrester
Workplace AI will soon be as common as word processors and spreadsheets. Tangible AI benefits like better decision making, increased productivity, and better security will soon become must-haves for every business. Early movers have an opportunity to gain a competitive advantage. But doing so requires a strategic approach to AI adoption that takes advantage of technological advancements early, such as laptops and 2-in-1s with breakthrough AI capabilities.

These devices are now easy for any business to obtain in the form of AI PCs from Microsoft Surface. Because they contain a new kind of processor called an NPU, they can run AI experiences directly on the device. Just as CPU and GPU work together to run business applications, the NPU adds power-efficient AI processing for new and potentially game-changing experiences that complement those delivered from the cloud.

In a recent Microsoft webinar with experts from Forrester and Intel, leaders discussed how a thoughtful AI device strategy fuels operational success and positions organizations for sustained growth. In this blog post, we'll examine a few key areas of AI device strategy. For more, watch the full webinar here: How device choice impacts your AI adoption strategy.

Focusing on high-impact roles

An effective AI device strategy requires organizations to identify roles that gain the most value from AI capabilities. Data-centric functions, such as developers, analysts, and creative teams, depend on high-speed data processing, and AI-ready devices help these employees manage complex workflows, automate repetitive tasks, and visualize data-driven insights in real time.

Choosing AI-enabled endpoints is not just about the NPU. High-resolution displays and optimized screen ratios, for example, support high-impact roles by providing ample workspace for AI-assisted analysis, modeling, and design work. Starting with on-device AI for these functions helps drive rapid value and motivates other teams to see the potential in AI-powered workflows. The phased rollout of AI devices builds a foundation for broader AI integration.

Data governance remains central to technology's advantage

Data privacy and security enable confident adoption of AI tools. One benefit of devices with NPUs is that they allow AI to be used in scenarios where sending data to the cloud is not feasible. It's also important to consider the general security posture enabled by a device. Hardware-based security features such as TPM 2.0 and biometric authentication help protect device integrity, supporting AI usage within a secure framework. With built-in protections that include hardware encryption, secure user authentication options, and advanced firmware defenses, AI-enabled devices create a trusted environment that upholds privacy standards and aligns with organizational compliance requirements. Choosing devices like Microsoft Surface that fit seamlessly into a wide range of device management setups supports faster adoption and reduces risk.

Balancing advanced AI features with stable performance

AI-enabled devices bring unique processing capabilities that don't compromise the reliability of core functions. Specialized processors dedicated to AI workloads manage intensive tasks without drawing from the main CPU, preserving battery life and maintaining consistent performance. This balanced approach supports both advanced AI capabilities and essential day-to-day operations, providing employees with stable, responsive tools that adapt to their needs.
AI-driven interactions, like responsive touch, intuitive inking, and enhanced image processing, further improve user experience. High-quality cameras and intelligent audio capture, for instance, optimize interactions in virtual meetings and collaboration, making these devices versatile and effective across different work scenarios. By focusing on the user experience, organizations empower teams to take full advantage of technology without a steep learning curve.

Aligning IT and business goals for an effective AI strategy

A strong AI device strategy brings together IT priorities and broader business objectives. While IT teams focus on security, manageability, and integration with existing infrastructure, business leaders aim to increase efficiency and support innovation. Aligning these goals enables a smooth AI adoption process, allowing organizations to leverage AI's capabilities while meeting essential technical requirements.

Strategically investing in devices with integrated security and manageability features, such as remote management of device settings and firmware updates, gives IT greater control over deployment and maintenance. This integrated approach allows organizations to keep their AI device strategy aligned with long-term goals, reducing the need for costly upgrades and enabling teams to work within a secure, adaptable tech environment.

Supporting employee workflows with AI tools

AI-enabled devices enhance productivity by automating repetitive tasks and giving employees more time to focus on high-value work. Tools like intelligent personal assistants and voice-driven commands support employees by streamlining tasks that would otherwise require manual effort. Enhanced typing experiences and personalized touch interactions improve user engagement, making AI tools easier to integrate into everyday workflows.

With customizable features and inclusive design options, AI-enabled devices make advanced technology accessible to all team members, increasing satisfaction and reducing turnover. By enabling employees to focus on higher-level work, organizations can create an environment that supports meaningful productivity and helps retain talent.

Proactive IT management with AI-driven insights

Beyond the device, AI also offers new capabilities for device management, allowing IT teams to proactively monitor and resolve potential issues. By analyzing device usage patterns, AI can detect anomalies early, enabling IT to address risks before they impact employees. This shift from reactive to proactive management improves device reliability and reduces downtime, freeing IT resources to focus on broader strategic initiatives.

Integrated AI security tools also improve protection, identifying threats as they emerge and securing devices with minimal manual intervention. With insights derived from AI-driven monitoring, IT teams can maintain secure, reliable systems that enhance overall operational stability.

Crafting a forward-looking AI device strategy

A structured AI device strategy prioritizes both immediate and long-term ROI by examining where new technology can have the greatest impact while also enhancing existing capabilities. By acting early, organizations position themselves to gain speed with AI and adopt the latest advancements as they are released. Whether you're beginning with AI or looking to expand its role, a well-designed AI device strategy keeps your organization prepared for growth.
To explore how AI-enabled devices can drive your team's success, gain insights from experts at Forrester and Intel by watching the webinar: How device choice impacts your AI adoption strategy.