Build Your Dream With Autogen (2024)

The motivation - Is it possible to solve multi-step tasks?

More than a century ago, Francis Galton ran an experiment: he asked 787 villagers to guess the weight of an ox. Surprisingly, none of them guessed correctly, but when he averaged the different answers, the result was remarkably close to the true weight. How close? Within 10 pounds!

The idea of using LLMs as multi-agent systems is to deploy multiple LLM-backed agents that interact with each other to achieve complex goals a single model might not be able to handle alone. This approach leverages the skills and instructions of each agent to create a more capable and comprehensive system: imagine a fleet of purpose-built agents that can execute a variety of complicated tasks.

While large language models (LLMs) demonstrate remarkable capabilities in a variety of applications, such as language generation, understanding, and reasoning, they struggle to provide accurate answers when faced with complicated tasks.

According to the research paper "More Agents Is All You Need", the performance of large language models (LLMs) scales with the number of agents instantiated. The method is orthogonal to existing, more complicated enhancement techniques, and the degree of improvement correlates with task difficulty.
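To make the idea concrete, here is a minimal, hypothetical sketch of the sampling-and-voting approach described in that paper: several agents answer the same question independently, and the majority answer wins. The `ask_llm` helper is a placeholder standing in for any single-agent call.

```python
from collections import Counter

def majority_answer(question, ask_llm, n_agents=5):
    """Ask `n_agents` independent agents and return the most common answer."""
    answers = [ask_llm(question) for _ in range(n_agents)]  # independent samples
    return Counter(answers).most_common(1)[0][0]            # majority vote
```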

Now that we understand the motivation and the business value of solving complicated multi-step tasks, let's build our dream team.

AutoGen provides a general conversation pattern called group chat, which involves more than two agents. The core idea of group chat is that all agents contribute to a single conversation thread and share the same context. This is useful for tasks that require collaboration among multiple agents.

Priya is the VP of Engineering at "Great Company". The company leadership would like to build an LLM-based solution for the legal domain. Before writing a single line of code, Priya wants to research which open-source projects are available on GitHub:

"What are the 5 leading GitHub repositories on llm for the legal domain?"

Executing it on Google, Bing or another search engine will not provide a structured and accurate result.

Let's Build

We'll build a system of agents using the Autogen library. The agents include a human admin, developer, planner, code executor, and a quality assurance agent. Each agent is configured with a name, a role, and specific behaviors or responsibilities.

[Figure: Autogen Dream Team]

Here's the final output:

[Screenshot: final output of the group chat]

Install

(AutoGen requires Python >= 3.8)

```bash
pip install pyautogen
```

Set your API Endpoint

The `config_list_from_json` function loads a list of configurations from an environment variable or a JSON file.

```python
import autogen
from autogen.agentchat import ConversableAgent, UserProxyAgent, AssistantAgent, GroupChat, GroupChatManager
from autogen.oai.openai_utils import config_list_from_json
import os
from dotenv import load_dotenv
import warnings

warnings.filterwarnings('ignore')
load_dotenv()

config_list_gpt4 = config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={
        "model": ["gpt4o"],  # in this example we used GPT-4 Omni
    },
)
```

It first looks for the environment variable "OAI_CONFIG_LIST", which needs to be a valid JSON string. If that variable is not found, it looks for a JSON file named "OAI_CONFIG_LIST". It filters the configs by model (you can filter by other keys as well).

You can set the value of config_list in any way you prefer.
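For reference, a minimal OAI_CONFIG_LIST (whether set as an environment variable or stored in a file with that name) might look like the sketch below. The model name is assumed to match the filter used above, and the API key is a placeholder:

```json
[
    {
        "model": "gpt4o",
        "api_key": "<your-openai-api-key>"
    }
]
```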

Construct Agents

```python
gpt4_config = {
    "cache_seed": 42,  # change the cache_seed for different trials
    "temperature": 0,
    "config_list": config_list_gpt4,
    "timeout": 120,
}
```

Let's build our team. The following code sets up the agents:

```python
# User Proxy Agent - Admin
user_proxy = UserProxyAgent(
    name="Admin",
    human_input_mode="ALWAYS",
    system_message="1. A human admin. 2. Interact with the team. 3. Plan execution needs to be approved by this Admin.",
    code_execution_config=False,
    llm_config=gpt4_config,
    description="""Call this Agent if:
        You need guidance.
        The program is not working as expected.
        You need api key
        DO NOT CALL THIS AGENT IF:
        You need to execute the code.""",
)

# Assistant Agent - Developer
developer = AssistantAgent(
    name="Developer",
    llm_config=gpt4_config,
    system_message="""You are an AI developer. You follow an approved plan, follow these guidelines:
    1. You write python/shell code to solve tasks.
    2. Wrap the code in a code block that specifies the script type.
    3. The user can't modify your code. So do not suggest incomplete code which requires others to modify.
    4. You should print the specific code you would like the executor to run.
    5. Don't include multiple code blocks in one response.
    6. If you need to import libraries, use ```bash pip install module_name```, please send a code block that installs these libraries and then send the script with the full implementation code.
    7. Check the execution result returned by the executor. If the result indicates there is an error, fix the error and output the code again.
    8. Do not show appreciation in your responses, say only what is necessary.
    9. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumptions, collect additional info you need, and think of a different approach to try.
    """,
    description="""Call this Agent if:
        You need to write code.
        DO NOT CALL THIS AGENT IF:
        You need to execute the code.""",
)

# Assistant Agent - Planner
planner = AssistantAgent(
    name="Planner",
    # 2. The research should be executed with code
    system_message="""You are an AI Planner, follow these guidelines:
    1. Your plan should include 5 steps, you should provide a detailed plan to solve the task.
    2. Post project review isn't needed.
    3. Revise the plan based on feedback from admin and quality_assurance.
    4. The plan should include the various team members, explain which step is performed by whom, for instance: the Developer should write code, the Executor should execute code, important do not include the admin in the tasks e.g ask the admin to research.
    5. Do not show appreciation in your responses, say only what is necessary.
    6. The final message should include an accurate answer to the user request.
    """,
    llm_config=gpt4_config,
    description="""Call this Agent if:
        You need to build a plan.
        DO NOT CALL THIS AGENT IF:
        You need to execute the code.""",
)

# User Proxy Agent - Executor
executor = UserProxyAgent(
    name="Executor",
    system_message="1. You are the code executor. 2. Execute the code written by the developer and report the result. 3. You should read the developer request and execute the required code.",
    human_input_mode="NEVER",
    code_execution_config={
        "last_n_messages": 20,
        "work_dir": "dream",
        "use_docker": True,
    },
    description="""Call this Agent if:
        You need to execute the code written by the developer.
        You need to execute the last script.
        You have an import issue.
        DO NOT CALL THIS AGENT IF:
        You need to modify code.""",
)

# Assistant Agent - Quality Assurance
quality_assurance = AssistantAgent(
    name="Quality_assurance",
    system_message="""You are an AI Quality Assurance. Follow these instructions:
    1. Double check the plan.
    2. If there's a bug or error, suggest a resolution.
    3. If the task is not solved, analyze the problem, revisit your assumptions, collect additional info you need, and think of a different approach.""",
    llm_config=gpt4_config,
)
```

Group chat is a powerful conversation pattern, but it can be hard to control if the number of participating agents is large. AutoGen provides a way to constrain the selection of the next speaker by using the allowed_or_disallowed_speaker_transitions argument of the GroupChat class.

```python
allowed_transitions = {
    user_proxy: [planner, quality_assurance],
    planner: [user_proxy, developer, quality_assurance],
    developer: [executor, quality_assurance, user_proxy],
    executor: [developer],
    quality_assurance: [planner, developer, executor, user_proxy],
}
```

Now we can instantiate the GroupChat:

```python
system_message_manager = "You are the manager of a research group, your role is to manage the team and make sure the project is completed successfully."

groupchat = GroupChat(
    agents=[user_proxy, developer, planner, executor, quality_assurance],
    allowed_or_disallowed_speaker_transitions=allowed_transitions,
    speaker_transitions_type="allowed",
    messages=[],
    max_round=30,
    send_introductions=True,
)

manager = GroupChatManager(
    groupchat=groupchat,
    llm_config=gpt4_config,
    system_message=system_message_manager,
)
```

Sometimes it's a bit complicated to understand the relationships between the entities, so here we plot a graph representation of the allowed transitions:

```python
import networkx as nx
import matplotlib.pyplot as plt

G = nx.DiGraph()

# Add nodes
G.add_nodes_from([agent.name for agent in groupchat.agents])

# Add edges
for key, value in allowed_transitions.items():
    for agent in value:
        G.add_edge(key.name, agent.name)

# Set the figure size
plt.figure(figsize=(12, 8))

# Visualize
pos = nx.spring_layout(G)  # For consistent positioning

# Draw nodes and edges
nx.draw_networkx_nodes(G, pos)
nx.draw_networkx_edges(G, pos)

# Draw labels below the nodes
label_pos = {k: [v[0], v[1] - 0.1] for k, v in pos.items()}  # Shift labels below the nodes
nx.draw_networkx_labels(G, label_pos, verticalalignment='top', font_color="darkgreen")

# Adding margins
ax = plt.gca()
ax.margins(0.1)  # Increase the margin value if needed

# Adding a dynamic title
total_transitions = sum(len(v) for v in allowed_transitions.values())
title = f'Agent Interactions: {len(groupchat.agents)} Agents, {total_transitions} Potential Transitions'
plt.title(title)
plt.show()
```

[Figure: Agent Interactions graph - the agents and their potential transitions]

```python
task1 = "what are the 5 leading GitHub repositories on llm for the legal domain?"

chat_result = user_proxy.initiate_chat(
    manager,
    message=task1,
    clear_history=True,
)
```

Output:

```
Admin (to chat_manager):

what are the 5 leading GitHub repositories on llm for the legal domain?

--------------------------------------------------------------------------------
Planner (to chat_manager):

To identify the 5 leading GitHub repositories on large language models (LLM) for the legal domain, we will follow a structured plan. Here is the detailed plan:

### Step 1: Define Search Criteria
**Team Member:** Planner
- Define the criteria for what constitutes a "leading" GitHub repository. This could include factors such as the number of stars, forks, recent activity, and relevance to the legal domain.

### Step 2: Conduct Initial Search
**Team Member:** Developer
- Use GitHub's search functionality to find repositories related to LLMs in the legal domain.
- Apply filters based on the criteria defined in Step 1.

### Step 3: Compile and Analyze Data
**Team Member:** Developer
- Compile a list of the top 10 repositories based on the initial search.
- Analyze these repositories to ensure they meet the criteria and are relevant to the legal domain.

### Step 4: Narrow Down to Top 5
**Team Member:** Developer
- Narrow down the list from 10 to the top 5 repositories based on further analysis and comparison.

### Step 5: Verify and Finalize
**Team Member:** Quality Assurance
- Double-check the final list of 5 repositories to ensure they meet all the criteria and are indeed leading repositories in the legal domain.
- Provide a brief description of each repository and why it was selected.

### Final List of 5 Leading GitHub Repositories on LLM for the Legal Domain
1. **Repository Name:** [Link to Repository]
   - **Description:** Brief description of the repository and its relevance to the legal domain.
2. **Repository Name:** [Link to Repository]
   - **Description:** Brief description of the repository and its relevance to the legal domain.
3. **Repository Name:** [Link to Repository]
   - **Description:** Brief description of the repository and its relevance to the legal domain.
4. **Repository Name:** [Link to Repository]
   - **Description:** Brief description of the repository and its relevance to the legal domain.
5. **Repository Name:** [Link to Repository]
   - **Description:** Brief description of the repository and its relevance to the legal domain.

This plan will ensure that we identify and verify the top 5 leading GitHub repositories on LLM for the legal domain.
*****************
```

You can find the full output in the GitHub repo.

```
--------------------------------------------------------------------------------
Quality_assurance (to chat_manager):

### Final List of 5 Leading GitHub Repositories on LLM for the Legal Domain

1. **Repository Name:** [lexpredict-lexnlp](https://github.com/LexPredict/lexpredict-lexnlp)
   - **Description:** LexNLP by LexPredict
   - **Stars:** 676
   - **Forks:** 174
2. **Repository Name:** [Blackstone](https://github.com/ICLRandD/Blackstone)
   - **Description:** A spaCy pipeline and model for NLP on unstructured legal text.
   - **Stars:** 632
   - **Forks:** 100
3. **Repository Name:** [Legal-Text-Analytics](https://github.com/Liquid-Legal-Institute/Legal-Text-Analytics)
   - **Description:** A list of selected resources, methods, and tools dedicated to Legal Text Analytics.
   - **Stars:** 563
   - **Forks:** 113
4. **Repository Name:** [2019Legal-AI-Challenge-Legal-Case-Element-Recognition-solution](https://github.com/wangxupeng/2019Legal-AI-Challenge-Legal-Case-Element-Recognition-solution)
   - **Description:** Completed this competition in collaboration with Jiang Yan and Guan Shuicheng.
   - **Stars:** 501
   - **Forks:** 33
5. **Repository Name:** [DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)
   - **Description:** DISC-LawLLM, an intelligent legal system utilizing large language models (LLMs) to provide a wide range of legal services.
   - **Stars:** 445
   - **Forks:** 45

### Verification and Finalization
**Quality Assurance Task:**
- **Double-check the final list:** Ensure that the repositories meet all the criteria and are indeed leading repositories in the legal domain.
- **Provide a brief description:** Each repository has been described briefly, highlighting its relevance to the legal domain.

The task is now complete, and the final list of leading GitHub repositories on LLM for the legal domain has been verified and finalized.
```

Summary & Next Steps

We have shown how to build a multi-agent solution with AutoGen, demonstrating that complex multi-step tasks can be solved by a team of purpose-built agents.

Now we can deploy this group to solve various business use cases such as customer support, IT, finance, and more. Here are a few directions to take it further:

1. Teachability - Teachability uses a vector database to give an agent the ability to remember user teachings across conversations; you can read more here (a minimal sketch appears after this list).

2. Multimodal Conversable Agent - adding new modalities, such as images and audio.

3. Multi-model - while GPT-4 is a powerful model, open-source models like Phi-3 can handle many tasks, so we can implement differential routing, assigning different models to different agents (agent X uses model Y; agent Z uses model W). A sketch of this also appears below.
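As a rough sketch of the Teachability idea, here is how the contrib capability that ships with pyautogen can be attached to one of our agents. The import path and arguments are from memory and may require the teachable extra, so treat them as assumptions:

```python
from autogen.agentchat.contrib.capabilities.teachability import Teachability

# Assumed API: Teachability stores user teachings in a local vector DB
# and injects relevant memos into the agent's context in later chats.
teachability = Teachability(
    reset_db=False,                        # keep memories between runs
    path_to_db_dir="./tmp/teachability_db",  # hypothetical local path
)
teachability.add_to_agent(planner)  # e.g. let the Planner remember admin feedback
```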
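And a minimal sketch of differential routing: each agent simply receives its own llm_config, filtered from the same OAI_CONFIG_LIST. The "phi3" entry is hypothetical and only works if you add such a config (for example, a local endpoint serving Phi-3):

```python
# Hypothetical second config entry for an open-source model such as Phi-3
config_list_phi3 = config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": ["phi3"]},
)
phi3_config = {"temperature": 0, "config_list": config_list_phi3, "timeout": 120}

# Route simpler roles to the smaller model, keep GPT-4 for planning
developer = AssistantAgent(name="Developer", llm_config=phi3_config, system_message="...")
planner = AssistantAgent(name="Planner", llm_config=gpt4_config, system_message="...")
```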

Hope it was insightful, feel free to add comments/questions/GitHub stars :)

Please refer to this GitHub Repo for the full notebook
