Latest Issue:
2025 - Jan/Feb
In this issue, Sahil Malik shows that a HAL 9000-style AI assistant, inspired by "2001: A Space Odyssey," can run entirely offline on an off-the-shelf MacBook Pro. Paul Sheriff continues his .NET MAUI series with the Model-View-ViewModel (MVVM) and Dependency Injection (DI) design patterns, plus Commanding, for reusable, maintainable, and testable applications. Joydip Kanjilal applies the Command Query Responsibility Segregation (CQRS) pattern to microservices built with ASP.NET Core. In our cover story, Wei-Meng Lee walks through building applications on large language models (LLMs) with the LangChain framework. Jason Murphy describes the AI-driven system he built to generate immersive, tailored RPG encounters, and Mike Yeager reports on his move to a Snapdragon-powered Copilot+ PC and how its Hexagon NPU speeds up AI workloads.
Articles in the Latest Issue:
- AI is Stimulating Change
Rod explores the rapidly evolving landscape of artificial intelligence (AI) and machine learning (ML), emphasizing their increasing accessibility to smaller developers and companies. He notes that AI tools like Whisper and LangChain can enhance application functionalities while acknowledging the accompanying challenges, such as the potential for misinformation and the need for human oversight to ensure accuracy. Through CODE Magazine, Rod hopes to guide readers in integrating AI into their work responsibly, underscoring the importance of evaluating AI's suitability and potential risks. He advocates for a cautious and informed approach akin to "Trust but Verify."
- Building HAL 9000 (And It Runs Completely on My Mac)
Sahil Malik explores the feasibility of creating a sophisticated AI digital assistant akin to HAL 9000 from "2001: A Space Odyssey," all while ensuring the system operates offline on a commercially available MacBook Pro. Sahil details the process of constructing this AI system, leveraging tools like Hugging Face for model sourcing, OpenAI's Whisper for speech recognition, and the Gemma language model for processing and generating coherent, context-aware responses. He shares the technical steps, including code snippets and required components, illustrating that what was once science fiction is now achievable with modern technology. This project not only emphasizes the advancements in AI but also showcases how these technologies can be harnessed for practical and experimental applications by enthusiasts with access to standard consumer hardware.
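To give a flavor of the kind of offline pipeline Sahil describes, here is a minimal sketch that pairs OpenAI's Whisper for speech-to-text with a small Gemma model pulled from Hugging Face for the reply. The model sizes, file name, and prompt are illustrative assumptions, not the article's actual code.

```python
# Illustrative offline assistant loop: Whisper transcribes a spoken question,
# a locally cached Gemma model generates the answer. Model choices and the
# audio file name are placeholders, not the article's exact components.
import whisper
from transformers import pipeline

# Speech-to-text: runs entirely on the local machine once the model is downloaded.
stt = whisper.load_model("base")
question = stt.transcribe("question.wav")["text"]

# Text generation with a small instruction-tuned Gemma model from Hugging Face.
llm = pipeline("text-generation", model="google/gemma-2-2b-it")
prompt = f"You are a helpful shipboard assistant. Answer concisely.\nUser: {question}\nAssistant:"
output = llm(prompt, max_new_tokens=150)[0]["generated_text"]

# The pipeline echoes the prompt, so strip it off before printing the reply.
print(output[len(prompt):].strip())
```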
- Exploring .NET MAUI: MVVM, DI, and Commanding
In this fourth entry in his series on MAUI, Paul teaches you about the Model-View-ViewModel (MVVM) and Dependency Injection (DI) design patterns to make reusable, maintainable, and testable applications. You'll also learn how to make your code-behind more efficient using Commanding.
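Paul's article works in C# and .NET MAUI; purely as a language-agnostic illustration, the sketch below shows the shape of the pattern: a view model that receives a service through constructor injection and exposes a command the view can bind to. Every name here is hypothetical.

```python
# Framework-agnostic sketch of MVVM with constructor injection and a command.
# The real article uses C#/.NET MAUI; these classes are purely illustrative.
from dataclasses import dataclass, field
from typing import Callable, List


class CustomerService:
    """Data-access dependency that the view model receives via DI."""
    def get_customers(self) -> List[str]:
        return ["Alice", "Bob"]


@dataclass
class CustomerViewModel:
    service: CustomerService                      # injected dependency
    customers: List[str] = field(default_factory=list)

    @property
    def load_command(self) -> Callable[[], None]:
        """Command the view binds to (e.g., a button) instead of code-behind logic."""
        return self._load

    def _load(self) -> None:
        self.customers = self.service.get_customers()


# Composition root: wire up the dependency once, then hand the view model to the view.
vm = CustomerViewModel(service=CustomerService())
vm.load_command()
print(vm.customers)
```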
- Building Microservices Architecture Using CQRS and ASP.NET Core
Joydip Kanjilal explores the Command Query Responsibility Segregation (CQRS) design pattern and its application in microservices architectures built with ASP.NET Core. You'll learn the benefits of using CQRS, including scalability, performance optimization, and maintainability, by isolating read and write operations. Kanjilal provides a comprehensive guide to implementing CQRS, with code examples covering create, update, and delete operations built on ASP.NET Core Web API, MediatR for mediation, and Entity Framework Core.
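Joydip's implementation uses ASP.NET Core Web API, MediatR, and Entity Framework Core in C#; purely as an illustration of the pattern itself, the sketch below separates a write-side command handler from a read-side query handler. Every name in it is a hypothetical stand-in.

```python
# Pattern-only sketch of CQRS: commands mutate state, queries only read it.
# The article implements this with ASP.NET Core, MediatR, and EF Core;
# the classes below exist only to show the read/write separation.
from dataclasses import dataclass
from typing import Dict

_orders: Dict[int, str] = {}          # stand-in for the write/read stores


@dataclass
class CreateOrderCommand:             # write side: expresses an intent to change state
    order_id: int
    product: str


@dataclass
class GetOrderQuery:                  # read side: asks for data, never mutates it
    order_id: int


class CreateOrderHandler:
    def handle(self, cmd: CreateOrderCommand) -> None:
        _orders[cmd.order_id] = cmd.product


class GetOrderHandler:
    def handle(self, query: GetOrderQuery) -> str:
        return _orders[query.order_id]


# A mediator (MediatR in the article) would route each message to its handler.
CreateOrderHandler().handle(CreateOrderCommand(1, "Coffee"))
print(GetOrderHandler().handle(GetOrderQuery(1)))
```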
- Exploring LangChain: A Practical Approach to Language Models and Retrieval-Augmented Generation (RAG)
Wei-Meng Lee provides an in-depth guide to using the LangChain framework for building applications incorporating large language models (LLMs). He emphasizes LangChain's modular design, enabling developers to create complex workflows with customizable components and integrate external data sources, facilitating Retrieval-Augmented Generation. Key topics covered include constructing chains, maintaining conversational context with memory management, and leveraging Microsoft and Hugging Face's models for enhanced flexibility and privacy. Wei-Meng also demonstrates implementing RAG for document-based querying, offering a comprehensive overview of LangChain's capabilities for developing dynamic, data-driven solutions.
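As a rough sketch of the chain-plus-retrieval workflow Wei-Meng covers: a prompt, model, and output parser composed into a chain, with a vector-store retriever supplying context for RAG. The specific model, embeddings, and documents are assumptions; the article also covers Microsoft and Hugging Face models.

```python
# Minimal LangChain sketch: a prompt | model | parser chain, with a FAISS
# retriever supplying context (a simple RAG setup). Model and embedding
# choices are illustrative; the article discusses several alternatives.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

# Index a few documents so the chain can ground its answer in them.
docs = ["CODE Magazine covers .NET and AI topics.", "LangChain composes LLM workflows."]
retriever = FAISS.from_texts(docs, OpenAIEmbeddings()).as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

question = "What does LangChain do?"
context = "\n".join(d.page_content for d in retriever.invoke(question))
print(chain.invoke({"context": context, "question": question}))
```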
- Semantic Kernel Part 4: Agents
In "Semantic Kernel Part 4: Agents," Mike Yeager explores the use of agents within Semantic Kernel (SK) to tackle complex tasks by customizing Large Language Models (LLMs). Mike explains that agents function as specialized LLMs with specific capabilities, such as performing calculations or accessing tools like MATLAB, to produce more accurate and specialized outcomes. You'll learn about creating assistant agents for specific use cases, like tax calculation, and chat agents that collaborate to achieve tasks, exemplified through a software development scenario. These agents simplify processes by being modular and pre-configured, allowing developers to build more extensive, manageable systems while treating agents as source code.
- The Infinite Monster Engine
Jason Murphy explores the evolution of his fascination with tabletop role-playing games, from his improvisational beginnings as a Dungeon Master to creating a sophisticated encounter builder using Large Language Models (LLMs). He details his development of an automated system leveraging AI to generate immersive and tailored RPG encounters. By instructing an LLM and refining its responses, Murphy effectively reduces preparation time, enabling more dynamic game sessions. The article underscores the potential of AI in enhancing the gaming experience, while providing practical guidance on prompt engineering and technical setup for building a functional RPG encounter generator.
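To give a flavor of the prompt-engineering step Jason describes, here is a minimal hypothetical sketch that asks an LLM for a structured encounter. The system prompt, model name, and requested JSON fields are assumptions, not the article's actual prompts.

```python
# Hypothetical encounter-generator prompt. The system prompt, model name,
# and requested fields are illustrative, not the article's own.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

system = (
    "You are an RPG encounter builder. Return JSON with keys: "
    "'setting', 'monsters' (name, count, tactics), and 'twist'. "
    "Match the party's level and keep descriptions to two sentences each."
)
user = "Party of four, level 5, exploring a flooded dwarven mine."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "system", "content": system},
              {"role": "user", "content": user}],
)
print(response.choices[0].message.content)
```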
- My New Copilot+ PC
Mike recounts his transition to an ARM-based, Snapdragon-powered Copilot+ PC, emphasizing how its Hexagon NPU accelerates AI workloads. Despite minor software compatibility issues and delayed AI features like Phi Silica and Recall, Mike appreciates the device's speed and efficiency for development work, such as running Visual Studio and programming on Windows ARM64. He experiments with AI capabilities using ONNX models and remains optimistic about the AI functionality still to come, acknowledging that while the hardware is promising, its full potential is yet to be realized.
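For readers curious about the ONNX side, here is a minimal sketch of loading a model with ONNX Runtime and requesting Qualcomm's QNN execution provider (the route to the Hexagon NPU), falling back to CPU. The model file and input assumptions are placeholders, and Mike's setup may differ.

```python
# Illustrative ONNX Runtime session that prefers the Qualcomm QNN execution
# provider and falls back to CPU. The model file is a placeholder, assumed
# to take a single float32 input; this is not the article's configuration.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["QNNExecutionProvider", "CPUExecutionProvider"],
)

# Build a dummy input matching the model's declared shape (1 for dynamic dims).
meta = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in meta.shape]
dummy = np.zeros(shape, dtype=np.float32)

print("Providers in use:", session.get_providers())
print("Output shapes:", [o.shape for o in session.run(None, {meta.name: dummy})])
```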