Free Self-Study Lessons
Building your own AI Agent collectives with LLM orchestration and advanced RAG architectures
From concept to production with popular open source technologies
This membership program is structured as a learning environment that takes you through building a complete AI application on an open source technology foundation. The program begins with a foundational course and community that introduces all of the topics needed to develop an advanced AI application built on a robust agent architecture.
We start by covering how to understand integrated AI capabilities and map them to organizational use cases, then set up an integrated development environment and a CI/CD pipeline to deploy changes to a remote cloud environment.
Module 1: Define Product Scope and Architecture - This course module introduces multi-agent AI systems, discusses the capabilities and limitations of LLMs, and presents RAG as a solution to those limitations. It also defines the key components of a RAG system, identifies use cases, and walks through designing a modular architecture.
Module 2: Setting Up Development Environment - This course module covers setting up a development environment, including choosing a platform and tools, configuring the environment for optimal use, and acquiring and preparing data for use in development projects.
Module 3: Setting Up Hosting and CI/CD - This course module introduces cloud platforms for deployment, covers the fundamentals of containerization with Docker, and explores continuous integration and continuous deployment (CI/CD) techniques for AI applications in depth.
We then cover how to architect and develop a backend AI agent platform and ecosystem that integrates diverse data sources and manages scheduled jobs and collaborative agents, along with user interfaces and API platforms that integrate with external services and power business automation.
Module 4: Building a Backend Data and AI Platform - This course module focuses on designing a data model for Retrieval-Augmented Generation (RAG), creating a retrieval component, integrating a Large Language Model (LLM) for generation, and building APIs for data access and interaction. It also covers incorporating semantic search and summarization to improve the accuracy and relevance of the retrieved data and the language model's responses; a minimal retrieval-and-generation sketch follows this list of modules.
Module 5: Integrating Data and Defining Events - This course module covers understanding event-driven architectures and their benefits and challenges, building data pipelines for real-time updates, and triggering actions on the LLM platform based on events for real-time processing and increased automation.
Module 6: Building Front-End UX, API, and Event Generation - This course module covers designing intuitive user interfaces, constructing interactive components, developing a robust API for front-end integration, and generating events from user interactions.
Module 7: Building an Adaptive UX - This course module covers analyzing user behavior and preferences, personalizing the user experience through real-time, context-aware UI elements, and integrating AI-driven algorithms to build adaptive interfaces that learn from user interactions and preferences.
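To make Module 4's retrieval-and-generation flow concrete, here is a minimal sketch in plain Python. It is illustrative only and not Nexical Core code: the in-memory document list, keyword-overlap scorer, and call_llm stub are stand-ins for the data model, semantic search, and LLM client you build in the course.

```python
# Minimal RAG sketch: retrieve relevant snippets, then build an LLM prompt.
# Illustrative only -- not Nexical Core code. A production build would use
# vector embeddings, a real document store, and an actual LLM client.

DOCUMENTS = [
    "Agents read project knowledge bases before acting.",
    "Scheduled jobs keep integrated data sources in sync.",
    "Role-based access controls restrict which teams see which projects.",
]

def score(query: str, document: str) -> float:
    """Naive keyword-overlap relevance score (stand-in for vector similarity)."""
    query_terms = set(query.lower().split())
    doc_terms = set(document.lower().split())
    return len(query_terms & doc_terms) / max(len(query_terms), 1)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k most relevant documents for the query."""
    ranked = sorted(DOCUMENTS, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Combine retrieved context and the user question into one prompt."""
    joined = "\n".join(f"- {snippet}" for snippet in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your model provider's client."""
    return f"[model response to prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    question = "How do agents use project knowledge bases?"
    print(call_llm(build_prompt(question, retrieve(question))))
```

In the modules themselves, the keyword scorer gives way to semantic search, and summarization is layered over the retrieved context before generation.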
Finally, we cover how to keep the system running, troubleshoot issues in remote cloud environments, and secure the system.
Module 8: Managing Production Environment - This course module covers deploying to production with a well-defined, automated process; monitoring and logging to identify and troubleshoot issues; scaling for increased traffic; implementing security best practices; and maintaining and updating the production environment, with a focus on RAG solutions.
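As a small taste of what the monitoring in Module 8 looks like in practice, here is a generic health-check endpoint of the kind a Kubernetes liveness probe or load balancer polls. It uses only the Python standard library and is not the platform's actual monitoring setup.

```python
# Minimal health-check endpoint sketch for production monitoring.
# Illustrative only -- shows the kind of probe a cluster or load balancer hits.

import json
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("health")

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
            log.info("health probe served")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, format, *args):
        pass  # route access logs through the logging module instead of stderr

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```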
To speed up the learning process, we build on an open source platform we have developed over the last six years: a multi-agent OS and project-based RAG and LLM orchestration platform.
The Nexical Core platform combines a Python/Django-based, AI-powered Knowledge Explorer Interface; a Python/Django backend data integration and multi-agent operating system/development framework; and a Kubernetes- and ArgoCD-based, CI/CD-driven deployment platform.
Our open source platform is built for customizability and portability
Built entirely on popular open source technologies that you can use and evolve for free, which also lets you dig deeper into the implementation to learn concepts more effectively
A modular and adaptable architecture that allows you to reuse functionality across projects and swap out pluggable implementations, services, and interfaces
The backend engine and core knowledge interface are powered by REST and realtime command APIs that are easy to integrate with the services you use, as sketched below
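For example, integrating an external service with a REST API of this kind usually looks like the short sketch below. The host, route, payload fields, and auth header here are placeholders rather than the platform's documented API, so check the project documentation for the real endpoints.

```python
# Sketch of calling a backend REST API from an external service.
# The URL, route, payload fields, and auth header are placeholders --
# consult the platform documentation for the actual API.

import requests

API_BASE = "https://your-deployment.example.com/api"  # placeholder host
TOKEN = "your-api-token"                               # placeholder credential

def submit_command(name: str, params: dict) -> dict:
    """Send a command to the backend and return its JSON response."""
    response = requests.post(
        f"{API_BASE}/commands",            # hypothetical route
        json={"command": name, "parameters": params},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(submit_command("knowledge.search", {"query": "quarterly report"}))
```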
Our open source platform focuses on technical and data security
A platform you can host on any cloud provider that supports Kubernetes, or run locally, which makes development and testing easier
Team-based access to AI knowledge bases that allows easy sharing of AI-generated content across projects and teams
Team- and role-based access controls and integrated logging, which let you keep a historical record and audit user and platform events and commands
Check out...
Video courses, a membership community, weekly Q&A sessions, and email support for learning how to build data-integrated AI agent platforms
Weekly coaching and Q&A sessions, video courses, and a membership community for learning how to build your own personalized, data-integrated AI agent platform projects