Build an AI agent with LangChain and Friendli Serverless Endpoints, integrating tool calling for dynamic and efficient responses.
pip install -qU langchain-openai langchain-community langchain wikipedia
import getpass
import os
if not os.environ.get("FRIENDLI_TOKEN"):
    os.environ["FRIENDLI_TOKEN"] = getpass.getpass("Enter your Friendli Token: ")
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
    model="meta-llama-3.1-8b-instruct",
    base_url="https://api.friendli.ai/serverless/v1",
    api_key=os.environ["FRIENDLI_TOKEN"],
)
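Friendli's serverless endpoint is OpenAI-compatible, which is why `ChatOpenAI` works against it with only a `base_url` change. As a sanity check, the equivalent raw request can be built with the standard library (a sketch only; the `/chat/completions` path follows the OpenAI convention, and actually sending it requires a valid token):

```python
import json
import urllib.request

# Build (but do not send) an OpenAI-style chat request against the
# Friendli serverless base URL used above.
payload = {
    "model": "meta-llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "hello"}],
}
req = urllib.request.Request(
    "https://api.friendli.ai/serverless/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer <FRIENDLI_TOKEN>",  # placeholder token
        "Content-Type": "application/json",
    },
)
print(req.full_url)
```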
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
api_wrapper = WikipediaAPIWrapper(top_k_results=1, doc_content_chars_max=100)
wiki = WikipediaQueryRun(api_wrapper=api_wrapper)
tools = [wiki]
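A tool-calling agent picks a tool by matching the name the model emits in its tool call against the tools it was given, then invokes it with the model-supplied arguments. A minimal pure-Python sketch of that dispatch step (a hypothetical stub, not LangChain's internals):

```python
# Hypothetical tool registry keyed by tool name; the lambda stands in
# for the real Wikipedia tool.
registry = {
    "wikipedia": lambda query: f"Page: {query}\nSummary: ...",
}

def dispatch(tool_call):
    # tool_call mirrors the {"name": ..., "args": ...} shape a
    # tool-calling model returns.
    return registry[tool_call["name"]](**tool_call["args"])

observation = dispatch({"name": "wikipedia", "args": {"query": "Linux kernel"}})
print(observation)
```

The observation returned by the tool is what gets fed back to the model via the `agent_scratchpad` placeholder in the prompt below.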
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder("chat_history"),
        ("user", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)
prompt.messages
from langchain.agents import AgentExecutor
from langchain.agents import create_tool_calling_agent
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
chat_history = []
while True:
    user_input = input("Enter your message: ")
    result = agent_executor.invoke(
        {"input": user_input, "chat_history": chat_history},
    )
    chat_history.append({"role": "user", "content": user_input})
    chat_history.append({"role": "assistant", "content": result["output"]})
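The loop threads the conversation through `chat_history` as OpenAI-style role/content dicts, appending one user and one assistant entry per turn. That bookkeeping in isolation, with a stub in place of the real executor:

```python
# Stub standing in for agent_executor.invoke, which would call the model.
def fake_invoke(payload):
    return {"output": f"echo: {payload['input']}"}

chat_history = []
for user_input in ["hello", "What does the Linux kernel do?"]:
    result = fake_invoke({"input": user_input, "chat_history": chat_history})
    chat_history.append({"role": "user", "content": user_input})
    chat_history.append({"role": "assistant", "content": result["output"]})

# Two turns -> four history entries, alternating user/assistant.
print(len(chat_history))  # 4
```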
Enter your Friendli Token: ··········
Enter your message: hello
> Entering new AgentExecutor chain...
Hello, it's nice to meet you. I'm here to help with any questions or topics you'd like to discuss. Is there something in particular you'd like to talk about, or do you need assistance with something?
> Finished chain.
Enter your message: What does the Linux kernel do?
> Entering new AgentExecutor chain...
Invoking: `wikipedia` with `{'query': 'Linux kernel'}`
responded: The Linux kernel is the core component of the Linux operating system. It acts as a bridge between the computer hardware and the user space applications. The kernel manages the system's hardware resources, such as memory, CPU, and I/O devices. It provides a set of interfaces and APIs that allow user space applications to interact with the hardware.
Page: Linux kernel
Summary: The Linux kernel is a free and open source, UNIX-like kernel that is
The Linux kernel is a free and open source, UNIX-like kernel that is responsible for managing the system's hardware resources, such as memory, CPU, and I/O devices. It provides a set of interfaces and APIs that allow user space applications to interact with the hardware. The kernel is the core component of the Linux operating system, and it plays a crucial role in ensuring the stability and security of the system.
> Finished chain.
Enter your message:
import getpass
import os
from langchain_openai import ChatOpenAI
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents import AgentExecutor
from langchain.agents import create_tool_calling_agent
if not os.environ.get("FRIENDLI_TOKEN"):
    os.environ["FRIENDLI_TOKEN"] = getpass.getpass("Enter your Friendli Token: ")
llm = ChatOpenAI(
    model="meta-llama-3.1-8b-instruct",
    base_url="https://api.friendli.ai/serverless/v1",
    api_key=os.environ["FRIENDLI_TOKEN"],
)
api_wrapper = WikipediaAPIWrapper(top_k_results=1, doc_content_chars_max=100)
wiki = WikipediaQueryRun(api_wrapper=api_wrapper)
tools = [wiki]
# Get the prompt to use - you can modify this!
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder("chat_history"),
        ("user", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
chat_history = []
while True:
    user_input = input("Enter your message: ")
    result = agent_executor.invoke(
        {"input": user_input, "chat_history": chat_history},
    )
    chat_history.append({"role": "user", "content": user_input})
    chat_history.append({"role": "assistant", "content": result["output"]})